Newsletter Source Monitoring Is Five Tasks Pretending to Be One

Your newsletter content curation workflow is about to encounter another 850,000 new stories today.

Last Tuesday, somewhere between 60,000 and 80,000 new news stories went live on the global internet during the morning hours alone. By the end of the day, more than 850,000 articles had been published across over 26,000 distinct online publishers, according to a July 2024 audit by Pangram Labs and NewsCatcher that sampled global news output across more than 75,000 sources.

Eight hundred fifty thousand stories. From 26,000 publishers. In one day. In 2024. Imagine what it looks like today.

Your newsletter is going to feature three to five of them.

Read that ratio one more time. Then look at how many browser tabs you have open right now.

Most creators experience this as a single overwhelming task they call “source monitoring.” In structural terms, it is five separate tasks running in parallel inside the same exhausting hour. The reason every tool you have ever tried works for two months and then quietly stops being opened is that each tool solves only one or two of the five.

Last week, I walked through the 124 hours per year that newsletter operators spend on source monitoring. That post answered the how-much question. This one answers the why. Why does this specific task resist every fix? Why does the category of “newsletter curation tools” produce so many half solutions? And what would software that actually works look like?

The answer starts by admitting that source monitoring is not one job.

What Is the Newsletter Content Curation Workflow Actually Made Of?

In 2023, a research team from the University of Michigan and Microsoft Research interviewed ten journalists who curate newsletters for a paper presented at the ACM Conference on Human Factors in Computing Systems. They watched these creators do their work, took notes on the steps, and mapped the structure of the curation process. Their finding: the workflow has consistent, repeatable stages.

I have reorganized those stages into the five subtasks every newsletter creator actually performs, whether or not they recognize that they are doing them.

Subtask 1: Discovery. Finding out what has been published since the last issue. Scrolling RSS, opening homepages, scanning social, checking Slack channels, monitoring email digests, browsing newsletter archives.

Subtask 2: Aggregation. Bringing the candidate stories into one workspace. Tabs, bookmarks, a “to read” list in Notion, a Slack thread to yourself, and a saved articles folder.

Subtask 3: Filtering. Cutting the obvious noise. Stories that are off topic, already covered, dated, irrelevant to your audience, written badly, or paywalled. Most stories die at this gate.

Subtask 4: Scoring. Ranking the survivors. Which one fits this week’s theme? Which one ties to a story you covered three issues ago? Which one will land best with the specific subscriber base you have built?

Subtask 5: Selection. Picking the final three to five for the issue.

Each subtask draws on a different cognitive muscle. Discovery is mechanical scanning. Aggregation is logistics. Filtering is intuitive judgment. Scoring is editorial knowledge. Selection is taste.

Source monitoring fails as a single task because it is five tasks running in parallel, and your browser is the only system pretending to manage all of them.
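One way to see why the chain breaks is to sketch it as code. This is a conceptual sketch only, not any tool’s real API: `is_relevant` and `score` are placeholders for the creator’s own editorial judgment, and the function names are illustrative.

```python
# Conceptual sketch of the five subtasks as one pipeline.
# `is_relevant` and `score` stand in for human editorial judgment;
# nothing here is a real tool's API.

def run_issue_pipeline(discovered, is_relevant, score, picks=5):
    # Subtask 2, Aggregation: one workspace instead of scattered tabs
    pool = list(discovered)
    # Subtask 3, Filtering: cut the obvious noise upstream
    survivors = [story for story in pool if is_relevant(story)]
    # Subtask 4, Scoring: rank survivors against editorial patterns
    ranked = sorted(survivors, key=score, reverse=True)
    # Subtask 5, Selection: the final three to five for the issue
    return ranked[:picks]
```

Discovery, the first subtask, is what produces `discovered` in the first place. The point of the sketch is that a tool covering only one of these functions leaves all the others to you.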

Why Does Every Newsletter Curation Tool Eventually Stop Working?

Look at how the existing tool category maps onto these five subtasks.

RSS readers solve discovery and partial aggregation. Then they stop. They do not filter. They do not score. They do not select. You still do all three by hand, inside the reader, every week.

Browser based “save for later” tools like Pocket, Instapaper, or Readwise Reader solve aggregation. They do not actually solve discovery, because you still have to find the article first to save it. They do not filter beyond letting you tag things. They do not score against your editorial patterns.

Social media is unstructured discovery. You find stuff, sometimes great stuff, in a stream that has no memory of what you publish. X (Twitter) Lists, mentioned by several journalists in the Atreja study as a manual workaround, help with discovery scope but do nothing for scoring or selection.

AI summarizers and chatbots can be useful for scoring if you give them the right context, but they do not natively know what you have published, so their scoring stays shallow. Drop in a link, get a summary. That is summarization, and curation is nothing like summarization.

Each tool does one thing well. None of them does the full chain. So creators stitch the chain together themselves, by hand, in their browser, every week.

The most common newsletter source workflow looks like this: open RSS reader, open browser, open Slack, open Twitter, open the homepages of three favorite publishers, open a Notion doc, open a Google Doc draft, open the inbox, open the previous issue for reference. Then start scrolling.

Most creators call that their workflow. In structural terms, it is a panic response that has hardened into a routine.

What Did the Research Actually Find About Newsletter Curators?

The Atreja et al. study makes one specific finding that should change how creators think about their source list.

The team found that journalists curating newsletters used what they called a “long tail curation process.” The majority of curated stories came from a handful of trusted publishers, while most other publishers in the source list contributed only sporadically, sometimes once a quarter, sometimes never. The participants in the study followed dozens or hundreds of sources but actually drew their issue stories from a small core of five to fifteen publishers, with occasional outliers.

A few sources do most of the work. Most sources are dead weight that the creator keeps “just in case.”

The pattern is consistent with how Dan Ni curated TLDR Newsletter, one of the most successful newsletter operators in the industry. According to a published interview, Ni used between 3,000 and 4,000 sources to curate TLDR. Three to four thousand. For a daily newsletter that picks five to seven stories per issue.

Even at that scale, with a system that demonstrably works, the ratio is brutal. You need a giant haystack to find a small number of needles. The needles are not evenly distributed across the haystack. They cluster.

If you have not audited your source list in eighteen months, your haystack has grown. The needles have not kept up.

Most newsletter source libraries are 30 percent productive and 70 percent link decay. You have been auditing the wrong thing.

Why Your Browser Is the Wrong Container for All of This

Most creators run the entire five subtask workflow through their browser tabs. There is a reason. The browser is the only place where Discovery, Aggregation, Filtering, Scoring, and Selection can technically all happen.

There is also a reason this fails.

In 2021, a Carnegie Mellon University research team published the first in-depth study of browser tab usage in over a decade. They found that 25 percent of participants reported having so many tabs open that their browser or computer crashed. They found that 30 percent qualified as “tab hoarders.” They found that 28 percent could not find the page they needed amid the clutter.

The researchers identified what they called the “black hole effect.” It describes the fear that closing a tab means the information is gone forever, even when the page is bookmarked, saved, or trivially refindable. People keep tabs open as anxiety insurance.

Then comes the punch. The CMU paper specifically named the kind of work that produces tab overload: “sense making and decision tasks that require absorbing information from many sources, stitching it together, and coming to a conclusion.”

That is a description of newsletter curation.

Sit with that for a moment.

The primary tool you use for production was built for showing pages, not for managing editorial decisions. You inherited it because nothing better existed.

Why Filtering Is the Real Leverage Point

If you can only fix one subtask in the five-step chain, fix filtering.

A 2023 systematic review of information overload research published in Frontiers in Psychology examined decades of studies on how knowledge workers cope with too much input. The recurring finding across the studies it cites is that filtering is the most effective structural intervention. Better tools, better personal habits, and better software all converge on the same insight: cutting noise upstream is more powerful than processing volume downstream.

This makes sense when you think about the five subtasks. If filtering removes 80 percent of incoming stories before scoring and selection, the rest of the chain becomes manageable. If filtering does not happen, scoring and selection collapse under volume.

The macro evidence backs this up. The Reuters Institute Digital News Report 2025 found that 40 percent of people worldwide now sometimes or often avoid the news entirely, the highest level the report has ever recorded. Among the reasons cited, 31 percent said they felt overwhelmed by the volume.

Surprising? Probably not, if you have ever tried to keep an RSS reader alive past month two.

If 31 percent of news consumers are overwhelmed by the firehose, the people producing newsletters from that firehose are operating under double the load. Consumers can opt out. Newsletter creators cannot. They have to keep filtering, every week, on a deadline.

Filtering is the lever. Source monitoring is the broken machine that prevents the lever from working.

How Do You Audit Your Newsletter Source Library in Three Steps?

You can diagnose your own source side problem this week. The audit takes about an hour, costs nothing, and requires only your past five issues and a spreadsheet.

Step 1: List every source you check. Every RSS feed, every newsletter you read for ideas, every Twitter account, every Slack channel, every favorite publisher’s homepage you visit on Mondays. Aim for completeness. Most creators find their list runs to fifty or more.

Step 2: For each source, count contributions. Open your last five issues. For every story or link you used, mark which source it came from. Then, in your master list, write that count next to the source.

Step 3: Calculate the yield ratio. For each source, divide the number of stories it contributed across five issues by the number of times you checked it in the same period. The result is the yield ratio. A source you check three times per issue but that contributed two stories in five issues has a yield ratio of about 0.13 (2 ÷ 15). A source you check once per week that contributed four stories has a yield ratio of 0.80 (4 ÷ 5).
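If you prefer to run the numbers in a script rather than a spreadsheet, here is a minimal sketch of Step 3. The source names and counts below are made up for illustration; the 0.10 cut line is the threshold the audit uses.

```python
# Minimal yield-ratio audit, per Step 3.
# Source names and counts are illustrative examples, not real data.

def yield_ratio(stories_contributed, times_checked):
    """Stories a source gave you, divided by times you checked it."""
    return stories_contributed / times_checked if times_checked else 0.0

# (source, stories contributed in last 5 issues, checks in same period)
sources = [
    ("Trusted Publisher A", 4, 5),   # checked once per issue
    ("Niche Blog B", 2, 15),         # checked three times per issue
    ("Dormant Feed C", 0, 5),        # zero yield: link decay
]

CUT_BELOW = 0.10
for name, contributed, checked in sources:
    ratio = yield_ratio(contributed, checked)
    verdict = "keep" if ratio >= CUT_BELOW else "cut"
    print(f"{name}: {ratio:.2f} -> {verdict}")
```

Running this prints 0.80, 0.13, and 0.00 for the three example sources, so only the dormant feed falls below the cut line.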

What you will find is that a small number of sources, usually five to ten, are doing most of the work. Most of the rest are below 0.10. A meaningful number sits at zero.

Those zero-yield sources are the link decay. They are still on your list because removing them feels risky. You worry that the moment you take a source off the list is the moment it produces the perfect story. So they sit there, contributing nothing but cognitive load every Monday morning.

The audit gives you permission to cut. Cut everything below 0.10. You will feel relief, guaranteed. Your weekly source monitoring time will drop by a third within two weeks. You are not alone in carrying that bloat. Most creators discover the same pattern when they run the numbers.

Where HeyNews Fits Into the Five Subtasks

HeyNews was built specifically against this five-subtask structure, because the founders ran into the same broken chain themselves while operating their own newsletters.

The platform’s source intelligence engine handles Discovery automatically. It extracts every site and feed your past issues have ever referenced, monitors them on a schedule, and pulls in new stories as they publish. There is also a built-in feed discovery tool that handles the “I know this site has an RSS feed somewhere, but I cannot find it” problem that anyone with a manual RSS workflow has encountered.

Aggregation collapses into a single feed. Stories show up in the same workspace where you write your draft. No tab graveyard. No Notion doc to keep separately.

Filtering and Scoring run together through a relevance score generated against your editorial patterns. The system reads your archive, learns which kinds of stories you have historically picked, and ranks incoming stories by how well they match. Stories that score low fall to the bottom. Stories that score high rise.

Selection stays with you. Always. The system surfaces candidates. You decide which three to five make the issue. That distinction matters. The mechanical work is automated. The editorial judgment is not, and should not be.

In a Nutshell

  • Newsletter source monitoring is a five-step workflow, not a single task: Discovery, Aggregation, Filtering, Scoring, Selection. Most existing tools solve one or two of the five and leave the rest to the creator.
  • Peer-reviewed research on newsletter curators (Atreja et al., CHI 2023) confirms a long tail bias: a small handful of sources contribute most of the stories, while most of the source list functions as background noise that the creator keeps “just in case.”
  • Browser tabs are the primary container most creators use for source monitoring, but Carnegie Mellon’s CHI 2021 study shows that tabs are structurally unfit for the sense-making work newsletter curation requires, producing tab overload, decision paralysis, and what the researchers call the “black hole effect.”
  • Filtering is the highest leverage subtask. A 2023 review in Frontiers in Psychology found that filtering noise upstream is the most effective structural intervention against information overload. Most curation tools optimize for the wrong end of the chain.
  • You can run a Source Library Audit this week. List every source you check, count its contributions across your last five issues, calculate the yield ratio, and cut everything below 0.10. Most creators discover their source library is 30 percent productive and 70 percent link decay.

The newsletter content curation workflow is broken because the tool category has been solving the wrong subtask for fifteen years. Discovery is the easy one. Filtering, scoring, and selection are where the time goes.

Audit your source list this week. Cut what does not yield. Spend the hour you save on the part of the workflow that actually requires your brain.

Check how automated source monitoring can help you.

Cagri Sarigoz

Co-founder & CEO of HeyNews. Cagri has spent 15+ years in growth and technical marketing, mostly figuring out how to make AI do the tedious parts of content creation so humans can focus on the interesting parts. At HeyNews, he builds the systems that turn RSS feeds, Reddit threads, and blog posts into a newsletter that sounds like you wrote it.

Try HeyNews free for 14 days

Start Free Trial