Architecture · CDR & reporting

The CDR contains your answer.
But only if you organize it first.

Why most contact center reporting fails at the data organization layer, not the dashboard layer — and the small number of dimensions that turn a million-row table into something a supervisor can actually use.

Every voice platform produces a Call Detail Record. It's not optional, it's not a feature, it's not something the vendor sells you — it's how telephony works. Every connected call, every queued call, every abandoned call, every transferred call writes a row of metadata into a table somewhere. Origin number, destination number, route, queue entry, hold events, agent assignment, disposition, duration, call leg structure. The data is there. It has always been there. For a mid-sized contact center, the CDR table grows by tens of thousands of rows per day; for larger ones, hundreds of thousands.

And almost none of it gets used.

Not because the data is bad. The data is excellent. The reason it doesn't get used is that nobody — supervisor, ops director, CTO — can do anything useful with a million-row flat table. The problem isn't that the CDR doesn't tell you what's happening in your contact center. The problem is that the CDR tells you everything that's happening in your contact center, all at once, in chronological order, with no organizing structure. It's the operational equivalent of being handed a transcript of every conversation in a city for the last month and being asked to find the one where someone got upset.

This piece is about the layer of work that sits between "the CDR exists" and "the dashboard is useful" — the data organization layer. It's the part of contact center reporting that gets the least attention in vendor marketing and accounts for the largest share of whether the reporting actually works.

The default state: a flat table nobody reads

Pick any contact center platform on the market. Open the CDR view. You will see, give or take cosmetic differences, the same thing: a paginated table, sorted by timestamp descending, with columns for caller ID, called number, agent, duration, and disposition. Maybe there's a search box. Maybe there's a date range filter. Maybe you can export to CSV.

That view is functional in the same way a phone book is functional — the information is technically present, but extracting any insight requires you to already know what you're looking for. If a supervisor walks up and asks "are we losing more calls in the morning or the afternoon?" — the flat CDR table cannot answer that. Not because the data isn't there; the data is absolutely there. It's that the data is not organized in any way that maps to the question.

Here's what the raw view typically looks like:

timestamp            from            to              agent   duration  disposition
2026-04-12 09:47:03  +1614********   +18004******    ag_142  00:04:21  answered
2026-04-12 09:47:11  +1313********   +18004******    ag_087  00:01:02  abandoned
2026-04-12 09:47:14  +18004******    +1248********   ag_201  00:08:55  answered
2026-04-12 09:47:18  +1734********   +18004******    ag_142  00:00:14  abandoned
2026-04-12 09:47:25  +1989********   +18004******    ag_087  00:33:08  answered_after_long_hold
2026-04-12 09:47:31  +1810********   +18004******    ag_165  00:02:18  answered
2026-04-12 09:47:42  +18004******    +1517********   ag_201  00:05:44  no_answer
2026-04-12 09:47:48  +1248********   +18004******    ag_142  00:01:51  answered
... (847,231 more rows for the month)

Eight rows out of nearly a million for the month. That fifth row — the call where someone waited thirty-three minutes before being answered — is the call you'd have wanted the supervisor to see in real time. In the flat table, it's surrounded by 847,000 other rows, scrolling past in chronological order, indistinguishable from anything else. The dashboard built on top of this table will only ever surface what the CDR has been organized to expose.
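The flat-table problem can be sketched in a few lines. The field names below are hypothetical, not any particular platform's schema; the point is that with no organizing structure, surfacing the thirty-three-minute hold means scanning every row in the month.

```python
# Minimal sketch of the flat CDR: a chronological list of rows, with
# hypothetical field names. Finding the outlier requires a full scan.
from dataclasses import dataclass

@dataclass
class CdrRow:
    timestamp: str
    agent: str
    duration_s: int      # total call duration in seconds
    disposition: str

rows = [
    CdrRow("2026-04-12 09:47:03", "ag_142", 261, "answered"),
    CdrRow("2026-04-12 09:47:25", "ag_087", 1988, "answered_after_long_hold"),
    CdrRow("2026-04-12 09:47:31", "ag_165", 138, "answered"),
]

# A full scan -- O(n) over the whole month -- just to find one bad call.
outliers = [r for r in rows if r.duration_s > 30 * 60]
```

With 847,000 rows instead of three, this scan is exactly what the supervisor would have to do by eye.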

The four cuts that change everything

What turns this raw stream into something operationally useful isn't a clever dashboard, it isn't AI, and it isn't a real-time alerting engine. It's a small set of dimensional cuts applied before any visualization or aggregation happens. In our deployments, four of these reliably do the heavy lifting:

  • By department or business unit — sales calls vs. support calls vs. billing calls vs. operations calls. Different operational tempo, different acceptable hold times, different staffing models, different business stakes.
  • By disposition — answered, abandoned, voicemail, transferred, no-answer, after-hours, busy. The disposition is what the call actually did, and most platforms throw it onto a pie chart instead of using it as a filter on every other view.
  • By call class — queue vs. inbound vs. outbound — already covered at length in our previous piece on max hold time, but worth reiterating: rolling these together produces meaningless averages. Splitting them produces actionable views.
  • By agent or agent role — individual agents, agent teams, or agent skill groups. The CDR tells you who handled each call; most reporting layers don't fully use that field beyond a per-agent productivity view.
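The four cuts amount to nothing more exotic than key functions over CDR rows. A minimal sketch, assuming dict-shaped rows with illustrative field names:

```python
# Each cut is a key function; any combination of cuts is a grouping.
CUTS = {
    "department":  lambda r: r["department"],
    "disposition": lambda r: r["disposition"],
    "call_class":  lambda r: r["call_class"],   # queue | inbound | outbound
    "agent":       lambda r: r["agent"],
}

def group_counts(rows, *dims):
    """Count rows under any combination of the four cuts."""
    counts = {}
    for r in rows:
        key = tuple(CUTS[d](r) for d in dims)
        counts[key] = counts.get(key, 0) + 1
    return counts

sample = [
    {"department": "sales",   "disposition": "abandoned",
     "call_class": "queue",   "agent": "ag_142"},
    {"department": "sales",   "disposition": "answered",
     "call_class": "queue",   "agent": "ag_142"},
    {"department": "support", "disposition": "answered",
     "call_class": "queue",   "agent": "ag_087"},
]
by_dept_disp = group_counts(sample, "department", "disposition")
```

The design choice that matters is that `group_counts` takes *any* subset of cuts, which is what makes the combinations in the next section possible.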

None of this is novel. Every contact center practitioner who's been in the field for more than a few years will tell you that these are the cuts that matter. What's striking is how rarely platforms ship reporting that does all four well — and how often the cuts that are built are siloed into separate reports that can't be combined.

Why combining the cuts is where the value lives

A single dimensional cut is interesting. Combined cuts are where the operational answers live. The questions a contact center supervisor or operations director actually asks are almost always two-dimensional or three-dimensional, and the CDR contains the answer to all of them — but only if the reporting layer lets you cross the dimensions:

  • "Which department has the highest abandonment rate this week?" — department × disposition.
  • "Are sales queue calls in the afternoon being abandoned more than morning calls?" — department × disposition × time-of-day.
  • "Is one specific agent driving our after-call-work time average up?" — agent × disposition × duration.
  • "Which call types are getting transferred most, and to whom?" — disposition × call class × destination.
  • "Are abandons concentrated in one queue, or spread across all of them?" — queue × disposition.

Each of these is a question a working supervisor asks several times a week. Each is unanswerable from a flat CDR. Each is trivial to answer if the data has been organized around the four cuts above and the reporting layer treats them as composable filters rather than separate canned reports.
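The first question on that list, sketched as department × disposition over dict rows (field names are illustrative, not a real schema):

```python
# "Which department has the highest abandonment rate this week?"
from collections import Counter

def abandonment_by_department(rows):
    totals = Counter(r["department"] for r in rows)
    abandons = Counter(r["department"] for r in rows
                       if r["disposition"] == "abandoned")
    return {dept: abandons[dept] / totals[dept] for dept in totals}

rows = [
    {"department": "sales",   "disposition": "abandoned"},
    {"department": "sales",   "disposition": "answered"},
    {"department": "support", "disposition": "answered"},
    {"department": "support", "disposition": "answered"},
]
rates = abandonment_by_department(rows)
worst = max(rates, key=rates.get)
```

Every other question on the list is the same pattern with a different pair or triple of cuts.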

"The dashboard layer is where contact center reporting gets the marketing budget. The data organization layer is where it gets used or doesn't."

What the organized view looks like

Once the CDR is organized along these dimensions, the supervisor's view stops being a flat table and starts looking like a pivot. Same data, fundamentally different shape:

/ Figure 1 — CDR organized by department × disposition
Department            Total   Answered   Abandoned    Voicemail   Transferred
Sales — queue         1,847   1,612      142 (7.7%)   61          32
Support — queue       2,310   2,189      58 (2.5%)    41          22
Billing — queue       684     619        39 (5.7%)    18          8
Operations — inbound  412     388        14 (3.4%)    7           3
Sales — outbound      3,944   2,118      n/a          n/a         n/a

The same week of CDR data, organized along two dimensions. Sales has nearly three times the abandonment rate of support — a number that's invisible in any aggregate view. The supervisor now has somewhere specific to look. That's not a fancier chart. That's the same data, organized.

Notice what just happened. The aggregate abandonment rate across all departments' queue and inbound calls is around 4.8% (253 abandons out of 5,253 calls) — a number that, on a top-line dashboard, would not raise alarms. But sales-queue abandonment is sitting at 7.7%, which is a meaningfully different operational reality. The aggregate view averaged the problem away. The organized view made it visible. Same CDR, same week, same data — different organization layer.
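The masking effect is pure arithmetic, and it is worth seeing directly. Using the queue and inbound numbers from Figure 1:

```python
# (total, abandoned) per department, from Figure 1 -- queue and inbound
# rows only, since the outbound row has no abandonment figures.
queues = {
    "sales":      (1847, 142),
    "support":    (2310, 58),
    "billing":    (684, 39),
    "operations": (412, 14),
}
aggregate_rate = (sum(a for _, a in queues.values())
                  / sum(t for t, _ in queues.values()))
per_dept_rate = {d: a / t for d, (t, a) in queues.items()}
# aggregate_rate lands just under 5%; sales alone is about 7.7%
```

The aggregate sits comfortably below any alarm threshold while the sales queue runs at more than one and a half times that rate.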

And critically: the supervisor doesn't need to know what to ask before opening the dashboard. The cuts are pre-built; they pivot interactively. Most platforms ship "the abandonment report" and "the department report" as separate static views — which means a supervisor who wants to see abandonment by department has to mentally cross-reference two reports or wait for IT to build a custom one. Any platform that treats the CDR organization layer as a real architectural concern eliminates that gap entirely.

Why the industry doesn't fix this

None of what's described here is technically hard. Indexing a CDR table by department and disposition is a basic database operation. Computing a pivot is a feature spreadsheets had in the 1990s. The reason most platforms ship reporting that doesn't do this well isn't technical limitation — it's a product decision.

Contact center platforms are sold to executives. Executives buy on demos. Demos look better with twelve canned reports than with one configurable pivot. So most platforms ship twelve canned reports — abandonment report, agent productivity report, queue summary report, hourly call volume report, and so on — each of which presents one view of the data. None of them compose with each other. The CDR underneath supports any cut you want; the reporting layer above only exposes the dozen cuts the product team thought to build.

For the contact center buying it, this looks like a generous feature set during the sales cycle. Three months in, after the supervisor has asked the same "can I see X by Y?" question fifteen times and been told fifteen times that there's no report for that — the sales-cycle feature richness reveals itself as reporting-layer poverty.

What good organization actually requires

Building a CDR organization layer that holds up under real operational use — for a contact center between 50 and 500 seats — comes down to a small number of design decisions. None of them are exotic. All of them tend to be skipped by default platform configurations:

Tag every call leg with department and disposition at the time of the event.

Don't try to derive department from the destination number after the fact. Don't try to infer disposition from duration. Both of these get tagged at the source — by the route configuration for department, by the agent's wrap-up disposition for the call result. Tagging at source is cheap; tagging post-hoc is expensive and lossy.
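Tagging at source can be sketched in a few lines. The names below are illustrative, not a real platform API: the route configuration stamps the department when the leg opens, and the agent's wrap-up stamps the disposition when it closes.

```python
# Department comes from route config at entry; disposition comes from
# agent wrap-up at close. Neither is ever inferred after the fact.
ROUTE_DEPARTMENT = {            # maintained in route configuration
    "+18004001000": "sales",
    "+18004002000": "support",
}

def open_leg(called_number, call_class):
    return {
        "department":  ROUTE_DEPARTMENT[called_number],  # tagged at entry
        "call_class":  call_class,
        "disposition": None,    # set by wrap-up, never derived from duration
    }

def close_leg(leg, wrap_up_disposition):
    leg["disposition"] = wrap_up_disposition
    return leg

leg = close_leg(open_leg("+18004001000", "queue"), "answered")
```

Each tag costs one dictionary write at the moment the event happens; reconstructing the same tags from a month of raw rows is where the expense and the loss come in.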

Treat queue, inbound, and outbound as first-class call classes.

In the CDR schema itself, not just in the dashboard layer. A "call" is not a call: a queue call, an inbound call, and an outbound call each carry different semantics. They should be distinguishable at the row level so any aggregation downstream can split them automatically.
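At the schema level, that can be as simple as a constrained column. A SQLite sketch with illustrative column names:

```python
# call_class as a first-class, row-level column with a schema constraint.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE cdr (
        ts          TEXT NOT NULL,
        department  TEXT NOT NULL,
        call_class  TEXT NOT NULL
                    CHECK (call_class IN ('queue', 'inbound', 'outbound')),
        disposition TEXT NOT NULL
    )
""")
conn.executemany("INSERT INTO cdr VALUES (?, ?, ?, ?)", [
    ("2026-04-12 09:47:03", "sales", "queue",    "answered"),
    ("2026-04-12 09:47:11", "sales", "queue",    "abandoned"),
    ("2026-04-12 09:47:42", "sales", "outbound", "no_answer"),
])
# Any downstream aggregation can now split by class automatically.
by_class = dict(conn.execute(
    "SELECT call_class, COUNT(*) FROM cdr GROUP BY call_class"
).fetchall())
```

The CHECK constraint is the point: the distinction is enforced where the row is written, not reconstructed in the dashboard.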

Index for the questions, not for the rows.

A CDR table indexed only by timestamp is great for chronological scrolling and useless for anything else. Compound indexes on (department, timestamp), (disposition, timestamp), and (agent, timestamp) make the difference between sub-second pivot queries and dashboards that take 40 seconds to refresh. Most platforms do not ship with these indexes by default because the platform vendor doesn't know which dimensions you'll care about.
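The three compound indexes named above, sketched against a hypothetical `cdr` table in SQLite; the query plan confirms that a dimensional filter hits the index instead of scanning the table.

```python
# Compound indexes matching the questions, not the rows.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE cdr (ts TEXT, department TEXT, disposition TEXT, agent TEXT)"
)
conn.execute("CREATE INDEX idx_dept_ts  ON cdr (department, ts)")
conn.execute("CREATE INDEX idx_disp_ts  ON cdr (disposition, ts)")
conn.execute("CREATE INDEX idx_agent_ts ON cdr (agent, ts)")

# EXPLAIN QUERY PLAN shows a SEARCH using the index rather than a full
# SCAN of the table.
plan = str(conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM cdr WHERE department = 'sales' ORDER BY ts"
).fetchall())
```

Putting the timestamp second in each index is deliberate: the dimension narrows the rows, and the timestamp keeps them in chronological order for free within that dimension.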

Build the reporting layer as composable filters, not canned reports.

Every dimensional cut should be a filter that can stack with every other filter. Department × disposition × agent × time window should be one query, not a custom report. If your reporting layer can't do that, the supervisor will eventually stop using it and start asking IT for ad-hoc CSV exports. Which they will, every Monday morning, until somebody quits.
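Composability is a small amount of code, not a product. A sketch, assuming the hypothetical `cdr` schema above: each cut contributes one WHERE clause, and any stack of cuts is a single parameterized query.

```python
# Every filter stacks with every other filter; no canned reports.
def build_query(**filters):
    clauses = [f"{column} = ?" for column in filters]
    where = (" WHERE " + " AND ".join(clauses)) if clauses else ""
    return f"SELECT COUNT(*) FROM cdr{where}", list(filters.values())

sql, params = build_query(department="sales",
                          disposition="abandoned",
                          agent="ag_142")
```

Department × disposition × agent is one call; adding a time window is one more keyword argument, not a ticket to IT.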

Make every aggregate drill back to the underlying call legs.

When the dashboard says "142 abandoned calls in sales queue this week," the supervisor should be able to click that number and see the 142 specific call legs — caller, agent, time, queue duration. Without that drill-through, the aggregates are pretty but not investigable. The whole point of organizing the CDR is to make every summary a question you can answer, not just a number you can read.
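Drill-through falls out of the same structure: the aggregate cell and the legs behind it are the same WHERE clause, with and without aggregation. A SQLite sketch over a hypothetical schema:

```python
# The number on the dashboard and the rows behind it share one filter.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cdr (department TEXT, disposition TEXT, caller TEXT)")
conn.executemany("INSERT INTO cdr VALUES (?, ?, ?)", [
    ("sales", "abandoned", "+1614*"),
    ("sales", "abandoned", "+1313*"),
    ("sales", "answered",  "+1248*"),
])
cell = ("sales", "abandoned")
count = conn.execute(
    "SELECT COUNT(*) FROM cdr WHERE department = ? AND disposition = ?", cell
).fetchone()[0]
legs = conn.execute(
    "SELECT caller FROM cdr WHERE department = ? AND disposition = ?", cell
).fetchall()
# count == len(legs): every summary number opens back onto its rows
```

If the aggregate and the drill-down are built from different queries, they will eventually disagree; sharing the filter is what keeps the summary honest.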

The CTO question this raises

If you're evaluating a contact center platform — or auditing one you already operate — the test is not "does it have CDR reporting." Every platform has CDR reporting. The test is whether the reporting is built on a CDR that's been organized, or whether it's a flat table with twelve canned reports painted on top. The way to find out is to ask one question during the demo: "Show me abandonment rate, broken down by department, broken down further by time of day, for the last seven days." If that takes more than fifteen seconds to produce, the platform's CDR organization layer is not where it needs to be — and three months after deployment, your supervisors will be running that same query in a spreadsheet because the platform can't.
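On an organized CDR, the demo question is one query, not a custom report. A SQLite sketch over a hypothetical schema (`AVG` over the 0/1 comparison gives the rate directly):

```python
# Abandonment rate by department by hour of day, last seven days.
import sqlite3

QUERY = """
    SELECT department,
           strftime('%H', ts)             AS hour,
           AVG(disposition = 'abandoned') AS abandon_rate
    FROM cdr
    WHERE ts >= datetime('now', '-7 days')
    GROUP BY department, hour
    ORDER BY department, hour
"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cdr (ts TEXT, department TEXT, disposition TEXT)")
conn.executemany("INSERT INTO cdr VALUES (datetime('now'), ?, ?)", [
    ("sales",   "abandoned"),
    ("sales",   "answered"),
    ("support", "answered"),
])
rows = conn.execute(QUERY).fetchall()
```

With the compound indexes from the previous section in place, this is a sub-second answer, which is what "fifteen seconds in the demo" actually requires.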

For mid-market contact centers between 50 and 500 seats, this is the work that determines whether the reporting investment pays off. The dashboards are the visible part. The CDR organization is the load-bearing part. We've spent ten years building the second one for clients who came to us because the first one wasn't working — and almost every time, the fix wasn't the dashboard. The fix was upstream, in how the data was organized before anyone looked at it.

That's the work, and that's why we still build custom reporting layers for every deployment we touch. The platform you're on probably has more in its CDR than you've ever seen surfaced. The question is just whether anyone has organized it for you yet.

/ If this is the kind of work you need

The CDR organization layer is where deployments
actually earn their keep.

EBC designs and operates contact center reporting for organizations between fifty and five hundred seats. Most of the work isn't in the dashboards your supervisors look at — it's in the data organization underneath, which determines whether those dashboards can answer the questions a working supervisor actually asks. If you're scoping a deployment, request a written assessment.
