TableGrade
Civic data that actually makes sense.
Clark County publishes restaurant inspection records. Technically. The raw portal is a flat list of dates, point totals, and violation phrases: public on paper, opaque in practice. I built TableGrade to fix that. It's live at tablegrade.org, and real people use it to decide where to eat.
Visit TableGrade.org →

The county portal answers "what happened on this date?" It doesn't answer "should I be concerned?"
You get a flat list. Dates, point totals, violation codes. No grades, no trends, no way to tell if a restaurant is getting better or worse. No way to spot systemic problems across the county. And no plain-English explanation of what any of it means for a normal person who just wants to know whether to trust the sushi place on Fourth Street.
A restaurant might have had one bad inspection three years ago and ten clean ones since. A raw score doesn't tell you whether it's on an improving or declining trajectory. The raw data answers half a question. TableGrade answers the full one.
Seven views. One dataset. Every question answered.
Map and list views of 1,793 facilities
The main view is an interactive map of every permitted food facility in Clark County: restaurants, grocery stores, food trucks, schools. Pins are color-coded by score severity, and clusters shade by average score, so you can see at a glance which parts of the county have a concentration of problem facilities. A heat map layer overlays violation density across geography for a broader view of where problems are clustering.
The list view filters by time window, facility type, score band, and any of the 50+ violation categories the county tracks. You can pull up every restaurant inspected in the last 90 days that was cited specifically for improper cold holding, sorted by score. Try doing that on the county's site.
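To make that concrete, here is a rough Python sketch of the kind of filter the list view runs. The field names, category strings, and sort direction are placeholders, not the actual schema.

```python
from datetime import date, timedelta

def filter_inspections(inspections, days=90, facility_type=None,
                       score_range=None, violation_category=None):
    """Return inspections matching list-view style filters.

    Assumes each inspection is a dict with 'date' (a date), 'score',
    'facility_type', and 'violation_categories' fields. Illustrative only.
    """
    cutoff = date.today() - timedelta(days=days)
    results = []
    for insp in inspections:
        if insp["date"] < cutoff:
            continue
        if facility_type and insp["facility_type"] != facility_type:
            continue
        if score_range and not (score_range[0] <= insp["score"] <= score_range[1]):
            continue
        if violation_category and violation_category not in insp["violation_categories"]:
            continue
        results.append(insp)
    return sorted(results, key=lambda i: i["score"], reverse=True)

# e.g. every restaurant cited for improper cold holding in the last 90 days:
# filter_inspections(data, days=90, facility_type="Restaurant",
#                    violation_category="Improper cold holding")
```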
A grading system that actually means something
Designing the letter grade was the most interesting product problem in the whole build. A simple score-to-letter lookup table would have been wrong in ways that would mislead people.
The grade blends three components:
A baseline drawn from the facility's own average over the trailing 12 months.
A peer comparison: how the facility stacks up against similar facilities in the same permit category.
A trend adjustment, added or subtracted based on whether the last three inspections are trending up or down: a heads-up for trouble that hasn't fully materialized yet.
Hard overrides prevent gaming: any closure in the past 12 months is an automatic F. Any triggered follow-up inspection caps the grade at D+ regardless of blended score. This isn't just formula work. It's understanding what the data means in the real world and building logic that reflects the actual stakes.
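For the curious, here is a minimal sketch of the blended-grade idea in Python. The weights, letter cutoffs, and field names are illustrative assumptions, not the published formula, and it assumes higher inspection scores mean more demerit points (worse).

```python
LETTERS = ["A", "B", "C", "D+", "D", "F"]   # best to worst

def to_letter(points):
    # Illustrative cutoffs only, not TableGrade's actual bands.
    if points < 10: return "A"
    if points < 20: return "B"
    if points < 30: return "C"
    if points < 40: return "D"
    return "F"

def letter_grade(facility):
    base = facility["trailing_12mo_avg"]        # facility's own 12-month average
    peer_avg = facility["peer_category_avg"]    # average for its permit category
    last3 = facility["last_three_scores"]

    # Trend adjustment: a worsening run adds points, an improving run subtracts.
    trend = (last3[-1] - last3[0]) if len(last3) >= 3 else 0

    blended = base + 0.5 * (base - peer_avg) + 0.5 * trend

    # Hard overrides: these can't be averaged away by clean inspections.
    if facility.get("closure_in_last_12mo"):
        return "F"
    grade = to_letter(blended)
    if facility.get("follow_up_triggered") and LETTERS.index(grade) < LETTERS.index("D+"):
        grade = "D+"                            # cap at D+ after a follow-up
    return grade
```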
A score chart that earns its space
Every facility page has a score trend chart that does more than most charts bother to do. Every recorded inspection is plotted over time. The filled area under the line shifts color based on the nature of the violations: blue shading means the score is driven by many small, low-risk problems; red shading means a few severe ones. Same total score, very different story. You can see the difference at a glance without reading a single number.
A dashed line shows the county average range for that facility's permit category, so you immediately know whether a score is high or low relative to peers. It is a lot of information in a small space, and it all belongs there.
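The shading rule is simpler than it sounds. Something along these lines, where the threshold and colors are illustrative only:

```python
def fill_color(violations):
    """Pick the chart fill color from the mix of violation severities.

    Assumes each violation is a dict with a 'risk' field of
    'low' | 'medium' | 'high'. Sketch only; not the production logic.
    """
    if not violations:
        return "#2b7bba"                      # blue: nothing severe on record
    high = sum(1 for v in violations if v["risk"] == "high")
    share = high / len(violations)
    # Mostly low-risk issues shade blue; a meaningful share of severe ones
    # pushes the fill toward red, even if the total score is the same.
    return "#c0392b" if share > 0.25 else "#2b7bba"
```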
AI summaries that replace twenty minutes of manual digging
Every facility page has an AI-generated paragraph that translates the full inspection record into plain English. Here's what that looks like in practice.
"This facility scores below 96% of comparable restaurants in the county. It has a persistent pattern of temperature-control failures spanning three years. It appeared to improve in early 2025 when violations shifted toward lower-risk categories, but the most recent inspection suggests it hasn't sustained that correction and is heading in the wrong direction again." — Sample AI summary, sourced directly from inspection data
That paragraph is what a public health official, a concerned diner, or a local reporter would need to spend twenty minutes assembling by hand. The AI produces it per-facility, at scale, and it updates with every new inspection.
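Under the hood, the shape of that pipeline is roughly this: compute the facts first, then hand only those facts to the model. The helper names and fields below are placeholders, and `llm_complete` stands in for whatever generation API is actually used.

```python
from collections import Counter

def top_recurring_categories(inspections, n=3):
    # Violation categories that show up more than once across the record.
    counts = Counter(c for i in inspections for c in i["violation_categories"])
    return [cat for cat, k in counts.most_common(n) if k > 1]

def trend_direction(last_three):
    if len(last_three) < 3:
        return "insufficient data"
    return "worsening" if last_three[-1]["score"] > last_three[0]["score"] else "improving"

def build_summary(facility, inspections, llm_complete):
    # Structured retrieval and pattern detection happen before generation,
    # so the model only narrates facts that were computed from the data.
    facts = {
        "peer_percentile": facility["peer_percentile"],
        "recurring_categories": top_recurring_categories(inspections),
        "trend_last_three": trend_direction(inspections[-3:]),
        "last_inspection_date": inspections[-1]["date"].isoformat(),
    }
    prompt = (
        "Write a short plain-English summary of this facility's inspection "
        "history for a non-expert diner. Use only the facts provided.\n"
        f"Facts: {facts}"
    )
    return llm_complete(prompt)
```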
This is not a chatbot. It is not a gimmick. It is structured data retrieval, pattern detection, and language generation working together to produce something more useful than the raw data underneath it. That's exactly what I build for companies dealing with operational records, customer data, inspection logs, or financial reports.
Trending
Surfaces facilities with the largest score changes across their last three inspections, split into Most Improved and Most Concerning. Requires at least three data points to filter out noise. Answers "who is getting noticeably worse right now?" — a question the county portal can't touch.
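In rough Python terms, assuming higher scores mean more demerits and the field names are placeholders:

```python
def trending(facilities, top_n=10):
    """Rank facilities by net score change across their last three inspections."""
    deltas = []
    for f in facilities:
        scores = f["last_three_scores"]
        if len(scores) < 3:
            continue                          # too little data to call it a trend
        deltas.append((f["name"], scores[-1] - scores[0]))
    deltas.sort(key=lambda x: x[1])
    most_improved = deltas[:top_n]            # biggest drops in demerit points
    most_concerning = deltas[-top_n:][::-1]   # biggest jumps, worst first
    return most_improved, most_concerning
```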
Chronic Offenders
Finds facilities with persistent violation patterns across multiple inspections using a normalized chronic score and peer percentile. Different from sorting by current score. A facility that alternates between bad and adequate inspections might have the worst long-term record in the county while never topping a recency-sorted list.
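Here is one plausible reading of that scoring, sketched in Python. It is an assumption about the approach, not the site's actual definition:

```python
def chronic_score(inspections):
    """How consistently a facility is cited, scaled by average severity."""
    if not inspections:
        return 0.0
    cited = [i for i in inspections if i["violation_categories"]]
    if not cited:
        return 0.0
    frequency = len(cited) / len(inspections)        # how often problems recur
    avg_severity = sum(i["score"] for i in cited) / len(cited)
    return frequency * avg_severity

def peer_percentile(score, peer_scores):
    """Share of peers with a lower chronic score than this facility."""
    if not peer_scores:
        return 0.0
    below = sum(1 for s in peer_scores if s < score)
    return 100.0 * below / len(peer_scores)
```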
Inspection Gaps
Identifies facilities whose required re-inspection interval has been exceeded. Some are 250+ days past their mandated schedule. This page does not exist anywhere on the county's site. It requires computing expected re-inspection dates from permit category rules. The county has the raw data. They just haven't built it.
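The computation is simple once you encode the schedule. A sketch, with a made-up interval table standing in for the county's real rules:

```python
from datetime import date, timedelta

# Illustrative intervals only; not Clark County's actual schedule.
REQUIRED_INTERVAL_DAYS = {"restaurant": 180, "grocery": 365, "mobile_vendor": 180}

def overdue_facilities(facilities, today=None):
    """Return (name, days_past_due) for facilities past their expected re-inspection."""
    today = today or date.today()
    overdue = []
    for f in facilities:
        interval = REQUIRED_INTERVAL_DAYS.get(f["permit_category"])
        if interval is None:
            continue
        due = f["last_inspection_date"] + timedelta(days=interval)
        if today > due:
            overdue.append((f["name"], (today - due).days))
    return sorted(overdue, key=lambda x: x[1], reverse=True)
```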
Five things a paying client can ask for.
Data pipelines
Nightly ingestion, normalization, storage, and incremental updates from a live government data source. Same pattern applies to any client with operational data flowing from a source system into an analytics layer.
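The shape of that pipeline, sketched in Python with a placeholder fetch function and datastore; the source fields are assumptions, not the county's actual schema:

```python
from datetime import datetime, timezone

def nightly_sync(fetch_since, db, watermark):
    """Incrementally pull, normalize, and upsert new inspection records.

    fetch_since(dt) yields raw records newer than dt; db is a dict-like store
    keyed by inspection ID, so re-running the job never duplicates rows.
    """
    run_started = datetime.now(timezone.utc)
    for raw in fetch_since(watermark):
        record = {
            "inspection_id": raw["inspection_id"],
            "permit_id": raw["permit_id"],
            "date": raw["inspection_date"],
            "score": int(raw["score"]),
            "violations": [v.strip().lower() for v in raw.get("violations", [])],
        }
        db[record["inspection_id"]] = record   # upsert: replace if already present
    return run_started                         # new watermark for the next run
```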
Algorithm design with real stakes
The grading system required understanding what the data means, what users actually need to know, and how to build something that is fair, hard to game, and legible to a non-expert. That same thinking applies to any scoring model, risk model, or recommendation engine.
AI that earns its place
The per-facility summaries are a production feature serving real users. I know how to scope AI work so it adds genuine value rather than just checking a box. Reducing the cognitive load of interpreting complex structured data is the job. It's not sexy. It works.
Multiple views of one dataset
The same inspection data powers seven distinct views: map exploration, trending, chronic offender detection, inspection gap monitoring, county statistics, facility-level histories, and a searchable list. Each one answers a different question. That's product design, not just engineering.
Transparency as a feature
The site is honest about its data source, its methodology, and its limitations. The About page explains the grading formula in full. This is how I think software should be built, and how I'll build yours.
I built this because the problem was real and the data was there.
Everything you see on that site is something a paying client can ask for. If you're curious what a version of this applied to your operational data might look like, let's talk.