Reporting vs Storytelling
Reporting
Reporting is the systematic process of turning raw data into structured, interpretable information that supports monitoring, decision-making, and accountability.
It focuses on consistency, repeatability, and standardization.
Reporting in practice:
- Collecting data from one or more sources
- Transforming and aggregating data using agreed business rules
- Presenting outputs in standardized formats such as dashboards, tables, PDFs, or spreadsheets
The key word in reporting is standardized.
Storytelling
Storytelling is a communication practice that uses narrative structure to help audiences understand, interpret, and act.
Data storytelling applies storytelling techniques to communicate:
- Insights
- Implications
- Recommended actions
It goes beyond raw data by adding context, judgment, and meaning.
Visualization supports storytelling, but visualization alone is not a story.
Reporting vs Storytelling
| | Reporting | Storytelling |
| --- | --- | --- |
| Primary focus | Communicates the data | Communicates the insight and action |
| Approach | Standardized and repeatable | Customized to audience and situation |
| Audience effort | Requires data literacy | Reduces cognitive load for the audience |
| Core skills | Data accuracy and visualization | Critical thinking and communication |
| Typical output | Dashboards, reports, KPIs | Narratives, recommendations, decisions |
Single Source of Truth (SSOT)
A Single Source of Truth (SSOT) is a practice where one authoritative data source is defined for each metric or domain.
All reports, dashboards, and analyses rely on this source.
In simple terms:
One question → one correct answer
Why SSOT matters:
- Consistency across reports
- Trust in numbers
- Reduced duplicated logic
- Faster, more confident decision-making
- Clear governance and ownership
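The idea can be sketched in code: one module owns the definition of each metric, and every report imports that single definition instead of re-implementing the business rule. The function and data below are illustrative, not a real API.

```python
# Hypothetical SSOT sketch: one function is the agreed definition of the
# "active users" metric; every dashboard and ad-hoc analysis calls it.

def active_users(events, window_days=30):
    """Single agreed definition: distinct users with an event in the window."""
    return len({e["user_id"] for e in events if e["days_ago"] <= window_days})

# Illustrative event log.
events = [
    {"user_id": 1, "days_ago": 3},
    {"user_id": 2, "days_ago": 45},
    {"user_id": 1, "days_ago": 10},
]

print(active_users(events))  # 1 -- one question, one correct answer
```

Because every consumer calls the same function, a change to the business rule (say, the window length) propagates everywhere at once instead of drifting across copies.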
Types of Bias in Data
Understanding bias is crucial in data analytics, as it affects the reliability and fairness of insights and models.
Junior data analysts easily fall into the trap of believing that data is objective, that it represents raw truth, and that it cannot be misinterpreted.
Where Bias Comes From
Data is produced either by humans or by machines and algorithms created by humans. It therefore inevitably reflects human assumptions, choices, and limitations.
Bias can therefore lead to poor decisions or false beliefs if left unchecked.
Data Ethics and Bias
One of the core principles of data ethics is transparency in how data is:
- Collected
- Sampled
- Processed
- Interpreted
Remember
Bias awareness does not eliminate all issues, but it significantly reduces analytical risk.
Data Coverage & Inclusion Biases
Biases at this stage affect representation.
If present, all downstream analysis is structurally compromised.
Selection-related biases:
- Selection
- Sampling
- Exclusion
- Survivorship
Selection Bias
Happens when certain groups are systematically excluded or included due to how data is selected.
Example:
A marketing campaign’s success is evaluated only on customers who opened emails, ignoring those who didn’t → engagement appears artificially high.
How to Avoid?
- Randomize inclusion criteria; avoid convenience filtering (e.g., “openers only”).
- Compare included vs. excluded groups; use propensity scores or re-weighting.
- Expand recruitment channels and reduce barriers to inclusion.
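The "openers only" trap from the example can be made concrete with a few lines of code. The customer records below are illustrative; the point is that restricting the denominator to included cases inflates the metric.

```python
# Illustrative sketch of selection bias: measuring click rate only on
# customers who opened the email overstates campaign engagement.
customers = [
    {"opened": True,  "clicked": True},
    {"opened": True,  "clicked": False},
    {"opened": False, "clicked": False},
    {"opened": False, "clicked": False},
]

openers = [c for c in customers if c["opened"]]
rate_openers_only = sum(c["clicked"] for c in openers) / len(openers)
rate_all = sum(c["clicked"] for c in customers) / len(customers)

print(rate_openers_only)  # 0.5  -- looks strong
print(rate_all)           # 0.25 -- the honest denominator
```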
Sampling Bias
Occurs when the data collected is not representative of the entire population.
Example:
A telecom company predicts churn using data only from urban customers, ignoring rural ones.
The model will likely perform poorly for rural areas.
How to Avoid?
- Define the target population explicitly and sample across all key segments.
- Use probability sampling where possible; otherwise weight to population benchmarks.
- Monitor sample composition continuously and correct drift.
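Weighting to population benchmarks, mentioned above, can be sketched as simple post-stratification. The segment counts and churn figures below are invented for illustration.

```python
# Sketch of re-weighting a biased sample: urban customers are over-sampled,
# so each segment's rate is weighted by its true population share instead.
sample = {"urban": {"n": 80, "churned": 8},
          "rural": {"n": 20, "churned": 6}}
population_share = {"urban": 0.5, "rural": 0.5}  # assumed benchmark

naive = (sum(s["churned"] for s in sample.values())
         / sum(s["n"] for s in sample.values()))
weighted = sum(population_share[seg] * (s["churned"] / s["n"])
               for seg, s in sample.items())

print(naive)     # ~0.14 -- dominated by the urban over-sample
print(weighted)  # ~0.20 -- matches the population mix
```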
Survivorship Bias
Focusing only on successful cases while ignoring failures.
Example:
Analyzing only successful marketing campaigns inflates perceived effectiveness.
How to Avoid?
- Track full cohorts, including churned, inactive, or failed cases.
- Report denominators and attrition explicitly.
- Avoid filtering by “survived” outcomes in exploratory analysis.
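Tracking the full cohort and reporting denominators explicitly can be shown in a few lines. The campaign figures are illustrative.

```python
# Sketch of survivorship bias: averaging ROI only over campaigns that
# survived to launch inflates the result; the full cohort is honest.
campaigns = [
    {"name": "A", "launched": True,  "roi": 2.0},
    {"name": "B", "launched": True,  "roi": 1.5},
    {"name": "C", "launched": False, "roi": 0.0},  # cancelled mid-flight
    {"name": "D", "launched": False, "roi": 0.0},
]

survivors = [c["roi"] for c in campaigns if c["launched"]]
full_cohort = [c["roi"] for c in campaigns]

print(sum(survivors) / len(survivors))      # 1.75  -- survivors only
print(sum(full_cohort) / len(full_cohort))  # 0.875 -- full cohort
```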
Exclusion Bias
Important variables are mistakenly left out during data collection or preprocessing.
Example:
An e-commerce model excludes device_type (mobile vs. desktop), missing behavior differences that affect conversion.
How to Avoid?
- Map requirements with domain experts; maintain a "must-have" variable inventory.
- Trace feature lineage; run ablation tests to detect missing signal.
- Iterate collection forms and ETL to capture omitted fields.
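A minimal way to detect an omitted variable is to check whether the candidate field carries signal at all, before deciding to drop it. The order records below are invented to mirror the `device_type` example.

```python
# Sketch of an omitted-variable check: conversion differs sharply by
# device_type, so excluding the field would hide real signal.
orders = [
    {"device_type": "mobile",  "converted": 1},
    {"device_type": "mobile",  "converted": 0},
    {"device_type": "mobile",  "converted": 0},
    {"device_type": "desktop", "converted": 1},
    {"device_type": "desktop", "converted": 1},
]

def rate(device):
    rows = [o["converted"] for o in orders if o["device_type"] == device]
    return sum(rows) / len(rows)

print(rate("mobile"))   # ~0.33
print(rate("desktop"))  # 1.0
```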
Data Collection & Measurement Biases
Biases at this stage affect how data is recorded and reported.
Even with correct population coverage, poor measurement distorts reality.
Measurement Bias
Arises from inaccurate tools or methods used to collect data.
Example:
A survey app records 0 instead of a missing value when users skip a question, misleading analysts.
How to Avoid?
- Standardize definitions and validation rules; treat missing explicitly.
- Calibrate and test instruments; run overlap periods when switching tools.
- Include automated data-quality checks in ETL.
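The "0 instead of missing" example translates directly into code: coding skips as `None` and excluding them yields a very different mean than silently counting them as 0. The responses below are illustrative, on an assumed 1-5 scale.

```python
# Sketch of measurement bias: treating skipped answers as 0 drags the mean
# down; coding them as missing (None) and excluding them is honest.
responses = [4, 5, None, 3, None]  # None = question skipped

as_zero = [r if r is not None else 0 for r in responses]
answered = [r for r in responses if r is not None]

print(sum(as_zero) / len(as_zero))    # 2.4 -- skips silently counted as 0
print(sum(answered) / len(answered))  # 4.0 -- skips treated as missing

# A minimal ETL-style data-quality check: a 1-5 scale must never contain 0.
assert all(r is None or 1 <= r <= 5 for r in responses)
```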
Recall Bias
Occurs when participants don’t accurately remember past events.
Example:
Respondents under- or over-report their store visits over the past month.
How to Avoid?
- Shorten recall windows.
- Use diaries or passive behavioral data where possible.
- Ask concrete, bounded questions.
Response Bias
Participants give socially desirable or expected answers rather than truthful ones.
Example:
Customers rate satisfaction higher to appear polite.
How to Avoid?
- Use neutral wording and anonymity.
- Prefer behavioral measures over self-reports.
- Include validity checks such as reverse-coded items.
Observer Bias
A researcher’s expectations influence data collection or interpretation.
Example:
An analyst expecting a new ad to perform well emphasizes positive feedback.
How to Avoid?
- Blind analysts to treatment where feasible.
- Use objective scoring rubrics and inter-rater reliability checks.
- Automate extraction or labeling where appropriate.
Analysis & Modeling Biases
Biases at this stage affect interpretation, reasoning, and model learning.
They often amplify earlier data issues.
Confirmation Bias
Tendency to favor data that confirms existing beliefs.
Example:
Analyzing only high-discount months to prove discounts improve retention.
How to Avoid?
- Define research questions and success criteria upfront.
- Seek disconfirming evidence.
- Use holdout periods and peer review.
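Seeking disconfirming evidence can mean nothing more than widening the filter. The monthly figures below are invented to mirror the discount example: the belief survives only as long as the low-discount months stay out of view.

```python
# Sketch of confirmation bias: retention computed only on high-discount
# months "confirms" the belief; including all months tells another story.
months = [
    {"discount": "high", "retention": 0.80},
    {"discount": "high", "retention": 0.82},
    {"discount": "low",  "retention": 0.81},
    {"discount": "low",  "retention": 0.83},
]

high = [m["retention"] for m in months if m["discount"] == "high"]
low = [m["retention"] for m in months if m["discount"] == "low"]

print(sum(high) / len(high))  # ~0.81
print(sum(low) / len(low))    # ~0.82 -- discounts don't help after all
```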
Availability Bias
Recent or vivid events are over-weighted in judgment.
Example:
Overestimating plane crash risk after extensive media coverage.
How to Avoid?
- Use base rates and long-term averages.
- Place events in historical context.
Historical Bias
Outdated or biased historical data perpetuates inequalities.
Example:
A credit model trained on biased historical lending decisions disadvantages certain groups.
How to Avoid?
- Audit legacy datasets for representation and proxies.
- Refresh training data and apply time-aware validation.
- Monitor subgroup performance.
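Time-aware validation, mentioned above, amounts to splitting by date rather than at random, so the model is judged on recent behavior instead of legacy patterns. The records and cutoff year are illustrative.

```python
# Sketch of a time-aware split: fit on older records, evaluate on the most
# recent period, never on a random shuffle of all years.
records = [{"year": 2019}, {"year": 2021}, {"year": 2023}, {"year": 2024}]

cutoff = 2023  # assumed boundary between training history and holdout
train = [r for r in records if r["year"] < cutoff]
holdout = [r for r in records if r["year"] >= cutoff]

print(len(train), len(holdout))  # 2 2
```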
Algorithmic Bias
Algorithms learn or amplify biased patterns from data.
Example:
A hiring model trained on biased past decisions favors male applicants.
How to Avoid?
- Remove or regularize proxy features.
- Evaluate subgroup metrics and fairness constraints.
- Retrain using de-biased data or post-processing techniques.
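Evaluating subgroup metrics, as recommended above, can be sketched with a plain accuracy breakdown. The predictions and group labels below are invented; the point is that an overall metric can hide a large gap between groups.

```python
# Sketch of subgroup evaluation: overall accuracy looks fine while one
# group is served much worse by the model.
rows = [
    {"group": "A", "actual": 1, "pred": 1},
    {"group": "A", "actual": 0, "pred": 0},
    {"group": "A", "actual": 1, "pred": 1},
    {"group": "B", "actual": 1, "pred": 0},
    {"group": "B", "actual": 0, "pred": 0},
]

def accuracy(subset):
    return sum(r["actual"] == r["pred"] for r in subset) / len(subset)

overall = accuracy(rows)
by_group = {g: accuracy([r for r in rows if r["group"] == g])
            for g in ["A", "B"]}

print(overall)   # 0.8
print(by_group)  # {'A': 1.0, 'B': 0.5}
```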
Reporting & Communication Biases
Biases at this stage affect how insights are presented and interpreted.
Reporting Bias
Selective presentation of results that favor a narrative.
Example:
Highlighting CTR improvements while hiding declining customer satisfaction.
How to Avoid?
- Predefine reporting bundles with guardrail metrics.
- Show uncertainty and denominators.
- Publish full results or appendices.
Bias Summary
| Bias | Description | Example |
| --- | --- | --- |
| Selection Bias | Non-random inclusion | Only counting email openers |
| Sampling Bias | Non-representative sample | Urban-only churn model |
| Survivorship Bias | Ignoring failures | Studying only successful campaigns |
| Exclusion Bias | Missing variables | Omitting device type |
| Measurement Bias | Faulty data recording | 0 instead of missing |
| Recall Bias | Memory errors | Self-reported visits |
| Response Bias | Social desirability | Inflated satisfaction |
| Observer Bias | Researcher expectations | Selective feedback |
| Confirmation Bias | Favoring expected results | Ignoring non-discount data |
| Availability Bias | Recency effects | Overestimating rare risks |
| Historical Bias | Biased legacy data | Credit discrimination |
| Algorithmic Bias | Model amplification | Gender bias in hiring |
| Reporting Bias | Selective reporting | Hidden negative KPIs |