In the era of big data, more really is more: researchers need massive amounts of information from many sources. Extremely big data sets are needed both to test new academic theories and to replicate the results of theories already proposed.
So Thursday’s announcement that Yahoo (yhoo) Labs is releasing 13.5 terabytes of data, culled from 20 million readers of Yahoo News, Finance, Sports, and other sites over four months, was a big deal for academics and big data enthusiasts, who will now be able to slice and dice it.
But this data can also bring advantages to mere mortals who don’t care about big data or machine learning, a technology that enables computers to recognize patterns and use algorithms to “learn” from the data they examine.
For example, research using this data could lead to a news page perfectly tailored to users’ own interests—one that shows their team’s scores and injury reports, reviews of their favorite author’s new book, and real estate listings in areas they’re interested in.
As Suju Rajan, Yahoo Labs’ director of research for personalization science, puts it, making content more personally appealing is a good thing.
“I’m in Austin and a Longhorn fan, my husband likes the Houston Rockets. When he goes to Yahoo he wants to see what is most useful to him, I want to see what’s most useful to me,” Rajan said.
Yahoo News is already somewhat customized; that customization is based on a combination of user-provided preferences and preferences inferred from the user’s reading behavior.
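To make the idea concrete, here is a toy sketch of how a ranker might blend explicit, user-declared topic preferences with interests inferred from click history. This is not Yahoo’s actual system; every name, weight, and topic label below is hypothetical.

```python
# Toy sketch (NOT Yahoo's actual system): rank articles by blending
# explicit, user-declared topic preferences with interests inferred
# from click history. All names and weights are hypothetical.
from collections import Counter

def rank_articles(articles, declared_topics, click_history, alpha=0.6):
    """Score each (title, topic) pair; alpha weights explicit preferences."""
    clicks = Counter(click_history)
    total = sum(clicks.values()) or 1  # avoid division by zero
    def score(topic):
        explicit = 1.0 if topic in declared_topics else 0.0
        inferred = clicks[topic] / total  # share of past clicks on this topic
        return alpha * explicit + (1 - alpha) * inferred
    return sorted(articles, key=lambda a: score(a[1]), reverse=True)

articles = [("Rockets edge Spurs", "nba"),
            ("Longhorns land top recruit", "college-football"),
            ("Fed holds rates", "finance")]
ranked = rank_articles(articles,
                       declared_topics={"college-football"},
                       click_history=["nba", "nba", "finance"])
print([title for title, _ in ranked])
# → ['Longhorns land top recruit', 'Rockets edge Spurs', 'Fed holds rates']
```

A declared interest outranks a purely inferred one here because alpha favors explicit preferences; a production system would learn such weights from data rather than hard-code them.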
Because Yahoo hosts so many big sites (Yahoo News, Sports, Finance, and more), it has lots of content and many users viewing that content—a valuable combination. Rajan is careful to note that users had to opt in to the data gathering process and that all personally identifiable information (PII) was stripped out.
In her post, Rajan called the data trove the “largest-ever machine learning data set” offered to researchers. It comprises 110 billion “events,” or records, culled from reader interactions with Yahoo sites from February to May 2015.
Yahoo Labs would like this data set to become the benchmark for gauging the performance of machine learning algorithms going forward, she said.
The data is offered as part of Yahoo Labs’ existing Webscope program, which releases anonymized user data for non-commercial use, according to the post.
Companies like Yahoo, Facebook (fb), and Google (goog) all collect massive amounts of user data. Being able to claim leadership by providing the biggest and best public data set at the very least gives Yahoo bragging rights.
Two years ago, for example, Google offered up the GDELT data set, comprising a quarter of a billion records, to anyone wanting to run queries against it using Google’s BigQuery tool. At the time, Google billed GDELT as the world’s largest data set.
For all its business problems, which chief executive Marissa Mayer is trying to solve, Yahoo has always had ambitious and cool technology. For instance, the company contributed mightily to Hadoop, the popular open-source framework for storing and processing distributed data.
Projects and contributions like this data set may be one way for Yahoo to prove it still has the wherewithal to do great work.