Ryterr's Quality Scoring: Five Dimensions Explained
Most AI writing tools hand you a draft and consider the job done. You get a wall of text, maybe a readability score in the corner, and the implicit suggestion that "good enough" is your problem to define.
Ryterr doesn't work that way. Every post that runs through the pipeline comes back with a scored audit across five named dimensions. You see a number for each one. You see what dragged it down. You know what to fix before you publish.
This post walks through what each dimension measures, how to read the audit output, and how to use the scoring loop to build a consistent publishing operation rather than a collection of one-off posts.
Why a Single "Quality Score" Tells You Nothing
A score of 74 out of 100 is not actionable. You don't know whether the problem is a thin fact-checking pass, brand voice drift, or structural issues with your subheadings. You can't prioritize a fix you can't name.
Most tools that surface quality signals give you an aggregate. That's a black box. You improve it by guessing, then re-running, then guessing again.
The five-dimension framework exists specifically to solve this. Each dimension is scored independently. A post can score 90 on structure and 55 on fact-checking at the same time, and you know exactly where your next 10 minutes go.
Aggregate scores are a relic of simpler content pipelines. Named dimensions are how you run a serious one.
The Five Dimensions, One at a Time
Each dimension measures something distinct. None of them collapses into another.
1. Fact-checking density. This measures whether factual claims are backed by real, resolvable citations. A low score here doesn't mean the post is wrong. It means there are claims that aren't sourced, or citations that point to URLs that don't support the stated fact. A high score means every non-obvious claim has a link that goes somewhere real.
2. Brand voice alignment. This measures how closely the draft matches your established brand voice. Low scores show up when the third section of a long post starts sounding like generic SaaS copy, even if the intro nailed the tone. High scores mean the voice holds through the entire draft, not just the parts the writer warmed up on.
3. SEO structure. This measures whether your subheadings are doing real keyword work, whether section length matches search intent, and whether the post is organized for how people actually read on a results page. A low score here often means a subheading is a label ("Introduction") instead of a searchable phrase.
4. Citation quality. Separate from density: this measures whether the citations themselves are trustworthy sources, whether they're current, and whether they're relevant to the specific claim they're attached to. A post can have high citation density and still score low here if it's citing low-authority pages or linking to outdated statistics.
5. Readability and flow. This measures sentence complexity, paragraph length, and whether sections transition cleanly. A low score here doesn't mean the writing is bad. It usually means one section went long and dense while everything else stayed tight. The fix is usually surgical.
The reason independent scoring matters is that your post has different failure modes depending on your topic. A technical post might nail readability and drift on citations. A brand-heavy post might nail voice and miss SEO structure. You can't see that split in an aggregate.
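If it helps to picture why independent scores matter, here's a minimal sketch of what a five-dimension result could look like as data. The shape and field names are assumptions for illustration, not Ryterr's actual output format:

```typescript
// Hypothetical shape of a five-dimension audit result. Field names
// are illustrative, not Ryterr's actual output format.
interface AuditScores {
  factCheckingDensity: number; // claims backed by resolvable citations
  brandVoiceAlignment: number; // draft matches your established voice
  seoStructure: number;        // subheadings and sections do keyword work
  citationQuality: number;     // sources are trustworthy, current, relevant
  readabilityAndFlow: number;  // sentence complexity, transitions
}

// The same post can split widely across dimensions:
const example: AuditScores = {
  factCheckingDensity: 55,
  brandVoiceAlignment: 78,
  seoStructure: 90,
  citationQuality: 70,
  readabilityAndFlow: 82,
};
```

Five numbers instead of one is the whole point: the 55 tells you where the next 10 minutes go, and the 90 tells you where they don't.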

How to Read Your Audit Without Second-Guessing It
The audit output is ordered by severity. Flags that need to be resolved before publishing appear first. Suggestions that improve the post but don't block it appear below those.
There's a meaningful difference between the two categories. A fabricated citation URL is a blocking flag: the post can't go out with it in place, because it's a factual integrity problem, not a style preference. A subheading that could be more specific is a suggestion. You can publish with it as-is. You'll probably score higher if you tighten it, but it's your call.
Here's what a realistic audit result looks like: a post on SaaS pricing strategy scores 88 on structure, 91 on brand voice, 82 on readability, and 61 on fact-checking. The fact-checking flag identifies two specific claims: one where the cited statistic is from 2019 and the post is presenting it as current, and one where the citation URL resolves to the site's homepage rather than the specific study. You fix those two things. You re-run. The score goes to 84 on fact-checking. You publish.
The "no black box" commitment means every flag has a reason attached to it, not just a red dot. If the audit flags something, you can read why in one sentence. You're not decoding it.
The Audit Loop: From First Draft to Publish-Ready
The actual workflow is short. You run the pipeline, review the audit, use the Improve function on flagged sections, re-score, and publish.
A post that clears the audit in one pass takes the baseline pipeline time: 4-6 minutes. That's research, drafting, image generation, and the initial audit, all in sequence.
A post that needs two rounds adds time. A heavy fact-checking pass on a stats-dense topic takes longer than a pass on an opinion piece. Brand voice corrections on a topic that sits outside your usual content area take longer than corrections on a topic you've covered before. Expect a second pass to add 5-10 minutes in realistic cases.
The compounding effect is real. Your second batch of 10 posts will score higher than your first batch because the brand voice model has more signal. The first post teaches the model what you sound like. The fifth reinforces it. By the tenth, the gap between your first draft and your target score is narrower. You spend less time in the improvement loop because the initial draft is closer to where it needs to be.

Using Scores as a Publishing Standard, Not a Vanity Metric
A threshold is more useful than a goal. Instead of trying to maximize your scores, set a minimum: don't ship below 75 on fact-checking, don't ship below 80 on brand voice. Posts that clear the threshold go out. Posts that don't go back through Improve.
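A threshold like that is simple enough to automate. Here's a sketch of the gate, assuming scores come back as a plain map of dimension name to number; the threshold values are the examples from the paragraph above, not built-in defaults:

```typescript
// A minimal publish gate. Assumes scores arrive as a plain map of
// dimension name to a 0-100 number; the thresholds are the examples
// from the paragraph above, not built-in defaults.
const thresholds: Record<string, number> = {
  factCheckingDensity: 75,
  brandVoiceAlignment: 80,
};

function readyToPublish(scores: Record<string, number>): boolean {
  // Ship only if every thresholded dimension clears its minimum.
  // Dimensions without a threshold are advisory and don't block.
  return Object.entries(thresholds).every(
    ([dimension, min]) => (scores[dimension] ?? 0) >= min
  );
}

// Example: this post clears brand voice but not fact-checking.
console.log(
  readyToPublish({ factCheckingDensity: 61, brandVoiceAlignment: 91 })
); // false
```

The design choice is the `every`: one dimension below its minimum sends the post back through Improve, no matter how high the others are.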
For a solo founder publishing two posts per week, this removes a decision. You're not re-reading every draft and wondering whether it's "good enough." The threshold tells you. You spend your editing time on the posts that need it, not on the posts that already passed.
The counterargument worth taking seriously: won't chasing scores make posts formulaic? No, and here's why. Scores measure craft, not creativity. A post that scores 90 on structure can still be original, opinionated, and specific to your take. Structure means your subheadings are specific, not that they're identical to a template. Brand voice alignment means the post sounds like you, not like everyone else. The audit is checking whether the execution matches the intent, not whether the intent is interesting.
That said, if every post you write starts to sound like it was optimized for scores rather than written for readers, that's a signal to revisit your brand voice inputs, not to abandon the scoring system.

What the Audit Catches That You'd Miss on a Read-Through
A human proofreader catches spelling mistakes and awkward sentences. The audit catches different things.
Citation URLs that resolve to the wrong page. This is common with aggregator sites that reorganize their content. The original URL redirects to a homepage or a 404. The claim still looks cited in the text. The audit catches it because it checks where the URL actually goes, not just whether the URL is formatted correctly.
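The check behind that is conceptually simple: follow the URL and see where you actually land. A simplified sketch of the idea, not Ryterr's implementation, assuming a runtime with a global `fetch` (Node 18+ or a browser):

```typescript
// Sketch of a link-resolution check: does the citation URL still land
// on a real page, or has it decayed into a 404 or a redirect to the
// site's homepage? Illustrative only, not Ryterr's implementation.
async function citationResolves(url: string): Promise<boolean> {
  const response = await fetch(url, { redirect: "follow" });
  if (!response.ok) return false; // 404s and server errors fail outright

  // response.url is the final URL after redirects. A deep link that
  // now redirects to the site root no longer supports the claim.
  const landed = new URL(response.url);
  const original = new URL(url);
  const redirectedToHomepage =
    landed.pathname === "/" && original.pathname !== "/";

  return !redirectedToHomepage;
}
```

A well-formatted URL passes a proofread; only actually fetching it catches the decay.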
Brand voice drift in the back half of a long post. This happens more than most writers notice. The intro sounds right. The middle sections drift toward generic. By paragraph 11, the voice has shifted. A read-through often misses this because by that point you're skimming for errors, not listening to tone.
Subheadings that don't match the keyword target. A subheading can be descriptive and still be a wasted SEO signal if it doesn't include the phrase someone would actually search. The audit flags the gap between what the subheading says and the phrase the section is actually targeting.
A cited stat that links to a 404 is not just a formatting problem. It's a trust signal problem. Readers who click and land on a broken page don't come back. Search engines that crawl broken outbound links assign less authority to the post. The audit catches this because it runs on the full post, not a sample. Paragraph 11 gets the same check as paragraph 1.
That's the real value of the "real citations, no fabricated URLs" standard: it's enforced at audit time, not assumed at draft time.

FAQ
What's the difference between citation density and citation quality?
Citation density measures how many claims are sourced. Citation quality measures how good those sources are. A post can cite every claim and still score low on citation quality if those citations are outdated, low-authority, or pointing to pages that don't support the specific claim. Both dimensions need to be above your publish threshold before the post ships.
Can I customize the publish threshold for different post types?
Yes. A thought leadership post and a data-heavy research post don't have the same natural scores on fact-checking density. Setting different thresholds by post type means you're not holding an opinion piece to the same citation standard as a statistics roundup, and you're not letting a data post slide on citations because "it's technical."
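As a sketch, per-type thresholds are just a lookup keyed by post type. The type names and numbers here are hypothetical examples, not recommendations:

```typescript
// Illustrative per-post-type thresholds. The post-type names and
// numbers are hypothetical examples, not recommendations.
const thresholdsByType: Record<string, Record<string, number>> = {
  thoughtLeadership: { factCheckingDensity: 65, brandVoiceAlignment: 85 },
  dataResearch: { factCheckingDensity: 85, citationQuality: 80 },
};
```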
What happens when the Improve function conflicts with my own voice?
Use it as a starting point, not a final answer. The Improve function rewrites based on your brand voice inputs and the flagged dimension. If the rewrite doesn't sound right to you, edit it manually. Your judgment overrides the suggestion. The audit will re-score your manual edit the same way it scores the suggested one.
Does the audit run on the whole post or just flagged sections?
The full post. Every paragraph, every citation, every subheading gets checked. Nothing gets skipped because it appeared late in the draft. The fact that paragraph 11 exists is not a reason to give it less scrutiny than paragraph 2.
How long does it take before the brand voice model learns my style?
You'll see a meaningful improvement by your fifth post. The model has enough signal by then to distinguish your natural sentence rhythm from a generic AI draft. By the tenth post, the gap between first draft and target score on brand voice is typically smaller than it was on your first post. The improvement is gradual, not sudden.
After your next post runs through the pipeline, pull up the audit before you read the draft. Score each dimension first. Then read. You'll edit differently when you know which section is the problem before you open it.
If you haven't run a post through Ryterr yet, the first one takes about five minutes. Start at ryterr.com.