📖 ~22 min read · Kotler MM + IIM Raipur MM1 · Prasad Mali
"Marketing research is all about generating insights. Marketing insights provide diagnostic information about how and why we observe certain effects in the marketplace — and what that means for marketers."
— Kotler, Marketing Management
The Most Expensive Research Failure in Consumer Goods History
When Research Gets the Question Wrong
A cautionary story that every marketer should know before learning a single research method.
Case Study · 1985 · New Coke
200,000 Taste Tests. The Biggest Product Failure of the 20th Century.
In 1985, Coca-Cola conducted the most extensive consumer research campaign in the company's history. They tested the new formula — sweeter, smoother — on 200,000 people across 25 cities. The results were decisive: 55% of blind tasters preferred the new formula over both the old Coke and Pepsi.
The research was methodologically rigorous. The sample size was enormous. The data pointed in one direction.
In the weeks after launching New Coke, the company received 400,000 complaint calls and letters. Psychologists were called in to help management understand the anger. Protesters held vigils. A man in Seattle formed the Society for the Preservation of the Real Thing. The company pulled New Coke within 79 days — one of the most public product failures of the 20th century.
What went wrong? The research asked the wrong question. Blind taste tests measure one thing: sensory preference in an isolated sip. They cannot measure what a brand means to someone, what role it has played in their life, what they would feel if it disappeared. Coca-Cola asked "which tastes better in isolation?" when the real question was "what does this brand mean to you, and how would you feel if we changed it?" Nobody thought to ask that second question — because nobody thought to frame the research decision correctly in the first place.
The central lesson: The quality of an answer is bounded by the quality of the question. Most research failures are question failures, not methodology failures.
The Foundation
What Marketing Research Actually Is
Not data collection. Not a report. A bridge between the market and the decision-maker.
Marketing research is the function that links the consumer, customer, and public to the marketer through information — by specifying the information required to address a marketing decision, designing the method for collecting it, managing and implementing the data collection, analysing the results, and communicating the findings with their implications. (Kotler, MM)
The word "links" is doing real work in that definition. Research is not a one-time data collection activity. It is a bridge that must be maintained and crossed in both directions — from market to marketer, and from marketer decision back to market reality.
Principle 01
Research reduces uncertainty — it doesn't eliminate it
A company will never have perfect information. The question is whether the cost of gathering additional information is worth the improvement in decision quality. If a product launch has expected payoff of ₹50 crore and research costs ₹2 crore and meaningfully improves your decision — do it. If research costs ₹60 crore, it is not worth it regardless of what it might reveal.
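This break-even logic is just expected-value arithmetic. A minimal sketch, reusing the ₹50 crore payoff and ₹2 crore research cost from above — the success probability and the downside loss are invented purely for illustration:

```python
# Value-of-information sketch. The ₹50 crore payoff and ₹2 crore research
# cost come from the text; the 50% success probability and ₹30 crore
# downside loss are ILLUSTRATIVE assumptions.

def expected_value(p_good: float, payoff: float, loss: float) -> float:
    """Expected payoff (₹ crore) of launching blind into an uncertain market."""
    return p_good * payoff + (1 - p_good) * loss

p_good = 0.5
ev_blind = expected_value(p_good, payoff=50.0, loss=-30.0)   # ₹10 crore

# If research reliably reveals the market state, you launch only when the
# market is good and stay out (payoff 0) when it is bad.
ev_informed = p_good * 50.0 + (1 - p_good) * 0.0             # ₹25 crore

value_of_information = ev_informed - ev_blind                # ₹15 crore
research_cost = 2.0
print(f"Research worth commissioning? {value_of_information > research_cost}")
```

Under these assumptions, perfect information is worth ₹15 crore, so a ₹2 crore study clears the bar easily — and a ₹60 crore study never could, exactly as the text argues.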
Principle 02
Research produces insights, not answers
Data tells you what is happening. Insights tell you why — and what it means. "28% of surveyed customers mentioned price as a primary factor" is data. "Customers valued feeling like a smart shopper" is an insight. Data becomes insight when it changes how you understand something, and by extension, what you decide to do.
Principle 03
Most companies do too little research — or the wrong kind
They research what they already believe to find confirmation, rather than using research to challenge assumptions. Coca-Cola confirmed their formula preference data but never challenged their assumption that taste was what mattered. Research that only confirms is barely research — it is expensive self-reassurance.
The Marketing Information System — The Broader Context
Formal research is one component of a larger infrastructure: the Marketing Information System (MIS). Before understanding the research process, you need to see where research sits within the full picture of how marketing organisations gather and use information.
01 · Internal Records
What's already inside
Order-to-payment cycle: Sales velocity, returns, margins by SKU
CRM data: Customer purchase history, complaints, lifetime value
Inventory & ops: Stock levels, fulfillment rates, lead times
Pricing & cost data: Actual vs. planned, product-level profitability
02 · Marketing Intelligence
What's happening outside
Competitor moves: Pricing changes, new launches, ad spend
Industry trends: Analyst reports, trade publications, trade shows
Internal records are the cheapest and fastest source of information — and the most underused. Most companies are sitting on a goldmine of data from their own sales systems, CRM records, inventory systems, and customer service logs. Before commissioning any external research, a good marketing team exhausts what they can learn from within.
Marketing intelligence is the systematic collection of publicly available information about the competitive environment. Tom Stemberg, who built Staples, made weekly unannounced visits to competitors' stores, his own stores, and retailers in adjacent categories — specifically looking for what the store was doing right. That is marketing intelligence practice, not formal research. LinkedIn job postings often reveal strategic direction before any press release does.
Marketing research — what this page covers — is the formal, structured process of generating specific insights to inform specific decisions. It is more expensive and more rigorous than the other two, which is why it should be reserved for decisions where the uncertainty and stakes are high enough to justify the investment.
The MIS framing matters because it prevents the common error of going straight to commissioned primary research when the answer already exists in your own data or in publicly available secondary sources. Exhaust internal records and marketing intelligence first. Reserve formal research for the questions they cannot answer.
The Research Process
Six Steps That Must Not Be Rushed
Each step is necessary. Skipping or compressing steps is where most research goes wrong — and most research does go wrong at step one.
Step 01 · 🎯 Define the Problem
Step 02 · 🗺️ Develop the Plan
Step 03 · 📋 Build Instruments
Step 04 · 👥 Sampling Plan
Step 05 · 📡 Contact Methods
Step 06 · 💡 Analyse & Decide
The sequence matters. Companies that start at Step 3 (building a survey questionnaire) without completing Steps 1 and 2 are the ones who end up measuring the wrong thing very accurately. Most research failures trace back to an incomplete or wrong problem definition at Step 1.
Step 02 in Depth · Research Approaches
Five Ways to Collect Primary Data
These five approaches are not interchangeable. Each is designed for a different type of question. Choosing the wrong approach for your question produces data that cannot answer it — no matter how well you execute.
👁️
Approach 01
Observational Research
What people actually do, not what they say they do
Data is gathered by watching customers in their natural environment — shopping, consuming, interacting — without intervention. The power is that it captures real behaviour, which frequently differs from self-reported behaviour.
Ethnographic research immerses researchers into customers' lives to uncover unarticulated desires. ConAgra spent nine months observing families at home and found that popcorn is a "facilitator of interaction" — what people are really buying is a reason to sit together. T-Mobile used tracking software in 1,000 stores to measure how shoppers move through the space and which phones they pick up.
🗣️
Approach 02
Focus Group Research
Facilitated discussion with 6–10 carefully selected participants
A professional moderator runs a 1.5–2 hour guided discussion with a small group of participants, who receive a small payment for their time. Marketing managers observe from behind a one-way mirror or via video stream. The value is in the dynamics — hearing how real people talk about a category, what language they use, and what concerns surface that no survey would ever ask about.
Important: Online focus groups have become standard in India since COVID — cheaper, faster, and accessible to Tier-2 and Tier-3 city respondents who would be difficult to include in person.
8 people ≠ your market. Not projectable. Not a launch brief.
📊
Approach 03
Survey Research
The workhorse of quantitative marketing research
Structured questionnaires administered to large samples produce data you can aggregate, compare, and generalise. Best for measuring awareness, attitudes, and stated intentions — and for tracking changes over time in brand perception or category usage.
The critical limitation: surveys measure what people say. People are not always accurate reporters of their own behaviour or motivations. They rationalise and give socially desirable answers. This is why stated preference research (surveys) and revealed preference research (behavioural data) often conflict — and behavioural data usually wins.
Stated ≠ revealed preference. Social desirability bias.
💳
Approach 04
Behavioural Data
The cleanest signal — what customers did with their money and time
Actual transaction and usage data — what customers bought, when, how often they return, how they navigate a website. This is revealed preference: the record of actual choices, not stated intentions.
Zomato knows not just what users ordered but in what sequence they browsed, which restaurants they opened and closed, and how that correlates with reorder rates. The classic: beer and diapers were frequently bought together on Friday evenings at US grocery stores — young fathers on end-of-week runs. That insight came from transaction data, not from any survey question.
Best for
Actual behaviour, purchase patterns, loyalty, basket analysis
Key Limitation
Tells you what happened — not why
🧪
Approach 05
Experimental Research
The only approach that establishes cause and effect
An experiment controls extraneous variables and isolates the impact of one specific change. In marketing, the most common form is the A/B test: one group sees version A, another sees version B, all other factors held constant, and you measure the difference in outcome. Digital platforms have made this standard practice for pricing, messaging, product features, and UI decisions.
Field experiments go further. A retail brand might introduce a new shelf layout in 10 stores while keeping it unchanged in 10 comparable stores, then compare sales. An airline testing price sensitivity for a new service would offer it at one price on certain routes and a different price on comparable routes, then measure adoption.
Best for
Causal questions: "Does doing X cause people to buy more?"
Key Limitation
Requires time, sufficient sample size, and clean variable isolation
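The A/B comparison described above reduces to a two-proportion significance test. A stdlib-only sketch — the visitor and conversion counts are hypothetical, and in practice you would typically use a statistical library rather than hand-rolling this:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Z-statistic for the difference between two conversion rates,
    using the pooled proportion for the standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: 5,000 visitors per arm, 8.0% vs 9.2% conversion.
z = two_proportion_z(conv_a=400, n_a=5000, conv_b=460, n_b=5000)

# Two-sided p-value from the standard normal CDF.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"z = {z:.2f}, p = {p_value:.3f}")
```

With these made-up numbers, z ≈ 2.14 and p ≈ 0.03: the variant beats the baseline at the 5% level, so the observed lift is unlikely to be noise — which is exactly the causal claim an A/B test exists to support.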
Steps 03 & 04 in Depth
Instruments, Sampling & Contact Methods
How you ask is as important as what you ask. And who you ask determines what conclusions you can draw.
Questionnaire Design — Getting the Questions Right
Whatever approach you choose, you need an instrument. Questionnaires are the most common. Closed-ended questions specify all possible answers — easier to tabulate and compare statistically, used for descriptive and causal research. Open-ended questions let respondents answer in their own words — richer and harder to analyse, essential in exploratory research where you want to understand how people think, not just how many think a certain way.
Question design matters enormously — the wording, order, and response format all shape what you get back. Here are three of the most common errors:
✗ Common Mistake → ✓ Better Version
✗ "Don't you think our new packaging is more attractive?" (leading question: signals the desired answer)
✓ "How would you rate the attractiveness of the new packaging vs. the old? [1–7 scale]" (neutral framing: lets the respondent decide)
✗ "How satisfied are you with the speed and quality of service?" (double-barrelled: asks two things at once)
✓ "Please rate service speed [1–7]. Please rate service quality [1–7]." (separated into two distinct questions)
✗ "How many times per month do you use our app?" (assumes the respondent uses it at all)
✓ "Do you use any digital payment apps? [Yes/No] → If yes: how many times per week on average?" (filter question first: no assumed behaviour)
Qualitative Measures — When a Questionnaire Is Too Structured
Sometimes what you're looking for can't be captured by pre-set questions. These techniques probe for meaning, not frequency, and are the tools that generate hypotheses that quantitative research then tests.
Word Association
Show a brand name or category term, ask for the first word that comes to mind. The response is unfiltered and often reveals associations a direct question would never surface. Ask "what do you associate with government banks?" — the fastest words are the most deeply embedded associations.
Projective Techniques
Instead of "why do you buy this brand?", ask: "describe a person who buys this brand." People who would never say "I buy this because it makes me feel superior" will enthusiastically describe "the typical buyer" in those terms. Projection bypasses the social desirability filter.
Laddering
A structured sequence of "why?" questions. Start with a product attribute → ask why that matters → why that matters → until you reach a terminal value. Long battery → freedom of movement → I feel in control → I am someone who doesn't depend on external support. That terminal value is what the marketer is actually selling.
Neuromarketing
EEG, fMRI, biometric sensors, and eye-tracking measure responses that respondents cannot accurately self-report — emotional arousal, attention, memory encoding. P&G used EEG on the Pantene brand to understand women's emotional relationships with their hair, revealing triggers that conventional surveys had missed entirely.
The Sampling Plan — Who and How Many
In most research, you cannot study the entire population. You study a sample and infer conclusions about the whole. Three decisions define the sampling plan.
The counterintuitive truth: absolute sample size matters more than the proportion of the total population it represents. A well-designed sample of 1,200 people can accurately represent 1.4 billion — this is how national opinion polls work. The key is probability sampling and sufficient size to achieve statistical significance.
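That claim can be checked in a few lines: for a simple random sample, the 95% margin of error for a proportion is z·√(p(1−p)/n), which depends only on n — the 1.4 billion never enters the formula:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a proportion from a simple random sample.
    Worst case is p = 0.5; note that population size never appears."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 400, 1200, 10_000):
    print(f"n = {n:>6}: ±{margin_of_error(n) * 100:.1f} pct points")
```

A sample of 1,200 gives roughly ±2.8 percentage points whether the population is 10 lakh or 1.4 billion. Note the diminishing returns: halving the margin of error requires quadrupling the sample (100 → 400 takes ±9.8 to ±4.9), and going from 1,200 to 10,000 respondents only tightens it to about ±1.0.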
Procedure
What It Means
When to Use
Projectable?
Simple Random
Every member of the population has an equal chance of selection
Population is accessible and homogeneous
YES
Stratified Random
Population divided into strata, random sample from each stratum
When you need representation across all key subgroups
YES
Cluster Sample
Population divided into clusters (e.g. geographies), clusters randomly selected
When reaching individual members is expensive
YES
Convenience Sample
Whoever is easiest to access
Quick exploratory research only
NO
Judgement Sample
Researcher selects respondents based on expert judgement
When you need specific types of respondents
NO
Quota Sample
Pre-specified number from each subgroup, non-random selection within
Representation matters but pure probability sampling is impractical
NO
Online surveys in India systematically miss non-metro, older, and lower-income populations. In a country where 65% of the population is non-metro, online-only research excludes the majority. Always ask: does my contact method create systematic gaps between my sample and my target market?
Contact Methods — The Trade-off Map
Four methods for reaching your sample, each with different speed, richness, and cost profiles. Multi-method approaches are standard in serious research: qualitative in-person for depth, large online survey for scale.
MAIL / EMAIL
Slow · Low richness · Low cost
Reaches populations who won't engage online. Allows time with complex questions. Very low response rates in practice.
ONLINE
Fast · Medium richness · Low cost
Now dominant for urban India. Can embed images, video, branching logic. Severely under-represents rural, older, lower-income segments.
PHONE
Medium speed · Medium richness · Medium cost
Allows real-time clarification. Response rates have fallen sharply with unknown-caller screening. Mainly used for targeted B2B research with known contacts.
IN-PERSON
Slow · Highest richness · High cost
Irreplaceable for complex, emotionally charged topics where depth matters more than scale. Used for IDIs, focus groups, and ethnographic research.
Market Intelligence Tools · Part 1
Conjoint Analysis — What Customers Actually Value
The problem with asking customers what they want: they will tell you they want everything. Conjoint solves this by forcing real trade-offs.
How this works: respondents see pairs of mobile data plans and, each time, pick the one they would actually buy. Their choices reveal which attributes they value most — even when they don't realise they are making trade-offs. This is the principle behind conjoint analysis. The importance weights are inferred from the choices, not from asking directly — that's the point. In a real conjoint study, researchers run 15–25 choice tasks across 200+ respondents, then analyse which attributes drive decisions at the market level and within specific segments. This is the tool behind "what sachet size should this shampoo come in?" and "how much does a 'no harmful chemicals' claim change willingness to pay?"
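The inference step can be sketched with a toy version: credit each attribute whenever it differs between the chosen and the rejected plan, then normalise the credits into weights. Everything here — plans, attributes, choices — is hypothetical, and real conjoint uses multinomial logit or hierarchical Bayes estimation rather than counting:

```python
# Toy conjoint inference: each plan is a dict of attribute levels. For every
# choice task we credit the attributes on which the chosen plan differed
# from the rejected one. All data below is HYPOTHETICAL.
from collections import Counter

tasks = [  # (chosen plan, rejected plan)
    ({"price": 199, "data_gb": 2,   "validity": 28}, {"price": 299, "data_gb": 3,   "validity": 28}),
    ({"price": 249, "data_gb": 2,   "validity": 56}, {"price": 199, "data_gb": 1.5, "validity": 28}),
    ({"price": 199, "data_gb": 2,   "validity": 28}, {"price": 249, "data_gb": 2,   "validity": 56}),
    ({"price": 179, "data_gb": 1,   "validity": 28}, {"price": 299, "data_gb": 3,   "validity": 84}),
]

credits = Counter()
for chosen, rejected in tasks:
    for attr in chosen:
        if chosen[attr] != rejected[attr]:
            credits[attr] += 1   # this attribute could have driven the choice

total = sum(credits.values())
weights = {attr: c / total for attr, c in credits.items()}
print({a: f"{w:.0%}" for a, w in sorted(weights.items(), key=lambda kv: -kv[1])})
```

Crude as it is, the sketch shows the core idea: importance is inferred from trade-offs the respondent actually made, never from asking "how important is price to you?"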
What Conjoint Is Used For
Feature set decisions: Which features to include in a new product, which to cut — based on actual utility weights, not stated importance.
Optimal pricing: Willingness to pay for specific attribute bundles — revealed through choice behaviour, not survey questions.
Segment discovery: Different respondents have different preference structures — price-driven vs. quality-driven segments fall out of the data automatically.
Packaging & positioning: How much does adding a "no harmful chemicals" claim change WTP? Does a sachet at ₹2 outperform one at ₹3? These are conjoint questions.
Market Intelligence Tools · Part 1b
Bass Diffusion Model — Forecasting How a Market Grows
Conjoint tells you what customers prefer. Bass tells you how fast they'll actually adopt — and which type of customer drives adoption at each stage.
When a new product enters a market, adoption does not happen all at once. Some people try it early because they're genuinely interested in new things. The rest wait to see what those early users say. The Bass Diffusion Model captures this dynamic mathematically and produces a surprisingly accurate forecast of the adoption curve for new products — even before launch.
Frank Bass introduced the model in 1969. It has since been validated across hundreds of product categories, from colour television in the US to mobile phones in India. The model's accuracy is not because markets are predictable — it's because the underlying human behaviour (innovators pull, imitators follow) is remarkably stable across cultures and categories.
The Three Parameters
p · Coefficient of Innovation
The Innovator Rate
The probability a person who has not yet adopted will adopt due to mass media and external influence alone — regardless of what anyone else has done. Typically very small: 0.01–0.05 for most categories. Illustrative value here: p = 0.03.
q · Coefficient of Imitation
The Imitator Rate
The probability a non-adopter adopts because of word-of-mouth contact with existing adopters. This is the social contagion factor. Higher q = faster spread once early adopters exist. Typically 0.2–0.6. Illustrative value here: q = 0.38.
m · Market Potential
Total Addressable Adopters
The total number of people who will eventually adopt — the ceiling. This is your SOM in adoption terms, not revenue. Estimated from TAM/SAM analysis or survey data. Illustrative value here: m = 10 million.
[Chart: adoption curve of new adopters per period, decomposed into innovators (p · remaining non-adopters) and imitators (q · adopters · remaining non-adopters), with readouts for peak year, peak adopters, the year 50% of the market is reached, and the q/p ratio.]
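The model behind the curve is a single equation: new adopters in period t are n(t) = (p + q·N(t)/m)·(m − N(t)), where N(t) is the cumulative adopter count. A discrete-time (annual-step) sketch with the illustrative parameters p = 0.03, q = 0.38, m = 10 million:

```python
def bass_curve(p: float, q: float, m: float, periods: int) -> list:
    """Discrete-time Bass diffusion: new adopters per period.
    n(t) = (p + q * N/m) * (m - N), with N the cumulative adopters."""
    cumulative, new_adopters = 0.0, []
    for _ in range(periods):
        n_t = (p + q * cumulative / m) * (m - cumulative)
        new_adopters.append(n_t)
        cumulative += n_t
    return new_adopters

p, q, m = 0.03, 0.38, 10_000_000
curve = bass_curve(p, q, m, periods=15)
peak_year = curve.index(max(curve)) + 1

print(f"q/p ratio: {q / p:.1f}")   # > 10 → imitation-driven spread
print(f"peak adoption in year {peak_year}: "
      f"{max(curve) / 1e6:.2f}M new adopters that year")
```

With these parameters q/p ≈ 12.7 — firmly in imitation-driven territory. Note that the coarse annual step makes the simulated peak land later than the continuous-time formula t* = ln(q/p)/(p+q) ≈ 6.2 would suggest; real Bass fitting works with the continuous model.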
Reading the Curve — What It Tells You Strategically
High q/p ratio (imitation-driven)
A high ratio (q/p > 10) means adoption is largely word-of-mouth driven. The curve is steep and peaks early. Marketing spend should front-load on getting innovators to adopt and talk. UPI's spread across India is a textbook high-q case: once urban users adopted, imitation drove rural spread faster than any advertising could.
Low q/p ratio (innovation-driven)
A low ratio means adoption depends primarily on mass media reach — each customer is convinced independently, not by peer influence. The curve is flatter and peaks later. EV scooters in India today look like this: adoption is spreading but social proof is still building. The curve will steepen when q rises as early adopters become visible advocates.
What the peak means for investment
The peak of the new-adopter curve is the inflection point for capacity and inventory decisions. If you under-invest before the peak, a competitor captures the imitator wave. If you over-invest after the peak, you're building capacity for a declining growth rate. Bass gives you the forecast to time this correctly.
What Bass cannot tell you
Bass forecasts the shape of the adoption curve but not the ultimate market size (m). You still need research — surveys, TAM/SAM analysis, analogous product data — to estimate m. And Bass assumes a single homogeneous market: it does not capture segment-by-segment adoption, which is why it is combined with conjoint analysis and segmentation research in practice.
The q/p ratio is the single most useful number from a Bass analysis. q/p > 10 means the product spreads socially — your job is to ignite the network, not to advertise to individuals. q/p < 5 means you're fighting for each adopter independently — awareness-driven marketing stays relevant throughout. Jio's q/p was estimated at over 12 in its first two years, which explains why ₹20,000 crore in network investment paid off: the imitation effect did the marketing.
Market Intelligence Tools · Part 2
Perceptual Mapping — How Customers See the Competitive Landscape
A perceptual map shows where customers place competing brands relative to attributes that drive choice in that category. The Indian car market offers a compact example:
Maruti (Mass value)
Tata Nexon (Centre)
Hyundai Creta (Value-SUV)
Toyota Fortuner (Premium)
Mercedes GLC (Luxury)
What Perceptual Maps Reveal
Where you are
How customers currently perceive your brand. Often not where your positioning strategy intends. The gap between intended and perceived positioning is one of the most common sources of brand strategy failure.
Where competitors cluster
Clustering signals commoditisation — multiple brands fighting for the same position on the same attributes. Spread-out maps signal differentiated competitive landscapes where distinct positions are defended.
Where gaps exist
Spaces where no brand currently sits. Not all gaps are opportunities — some are unoccupied because customers don't value that combination. Genuine gaps represent spaces where a brand could establish distinctive advantage.
Market Size Analysis
TAM, SAM, SOM — Sizing the Opportunity Correctly
Before researching what customers want, you need to understand how large the market opportunity is — and at what level of specificity to measure it.
TAM: ₹1.8L Cr → SAM: ₹27,000 Cr → SOM: ₹1,200 Cr
TAM — Total Addressable Market
"Is this market worth pursuing at all?"
Total demand assuming 100% market share, no competition, no constraints. The theoretical ceiling. For Indian EV scooters: all 8 crore two-wheeler registrations/year × ₹22,500 avg annual cost = ₹1.8 lakh crore.
SAM — Serviceable Addressable Market
"What is the realistic opportunity for our specific offering?"
The portion of TAM you can target given your product, business model, and geographic reach. If your EV requires AC charging and your network covers 12 cities: 1.2 crore urban riders × ₹22,500 = ₹27,000 crore.
SOM — Serviceable Obtainable Market
"What should we actually plan to achieve?"
Realistic near-term capture given competition, brand awareness, and operational limits. After reality-checking against production capacity (50,000 units × ₹2.4L): ₹1,200 crore — supply side, not market size, is the binding constraint.
The production capacity constraint, not the market calculation, becomes the binding limit. This is a common and important finding — the market is not the constraint; your supply side is. TAM/SAM/SOM analysis makes this visible before you commit resources.
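The three numbers above follow from straight multiplication — worth writing out, because it exposes the deliberate unit switch at SOM (TAM and SAM are sized in annual spend, SOM in vehicle revenue against production capacity). The figures are the ones from the text:

```python
CRORE = 1e7  # 1 crore = 10 million

# TAM: all two-wheeler buyers, annual-spend basis.
tam = 8 * CRORE * 22_500      # 8 crore registrations/yr x ₹22,500 avg annual cost

# SAM: urban riders reachable by a 12-city charging network.
sam = 1.2 * CRORE * 22_500    # 1.2 crore urban riders x ₹22,500

# SOM: bounded by production capacity, vehicle-revenue basis.
som = 50_000 * 240_000        # 50,000 units x ₹2.4 lakh per vehicle

for name, value in [("TAM", tam), ("SAM", sam), ("SOM", som)]:
    print(f"{name}: ₹{value / CRORE:,.0f} crore")
```

This prints ₹1,80,000 crore (1.8 lakh crore), ₹27,000 crore, and ₹1,200 crore respectively — and makes visible that SOM is pinned by the 50,000-unit supply constraint, not by any demand calculation.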
Kotler's Market Demand Vocabulary
TAM/SAM/SOM are strategy tools. Kotler's vocabulary below is the analytical layer beneath them — the definitions you need when building a market demand forecast.
Term
Meaning
Strategic Use
Potential Market
Consumers with some interest in the product — regardless of income or access
Sets the outer ceiling. Useful for long-term vision.
Available Market
Consumers with interest, income, AND access
The realistic universe. Start here for SAM calculations.
Target Market
The portion of the Available Market the company decides to pursue
The strategic choice — who to serve and who to ignore.
Market Demand
Total volume bought by a customer group, in a specific period, at a specific marketing spend level
The foundation of revenue forecasting.
Primary Demand
Total volume demanded for a product category
If low, invest in category education — not brand awareness.
Secondary Demand
Total volume demanded for a specific brand within the category
Relevant once category awareness exists. Your brand's share of an existing pie.
Primary vs secondary demand is a strategic fork. A brand entering a well-developed category (e.g., cola) competes for secondary demand — why choose us over Pepsi? A brand entering an underdeveloped category (e.g., early EV market, mental health apps in India) must first build primary demand — convince people the category is worth using before persuading them to choose your brand.
Research Failure · Bookend Case
The Tropicana Lesson, Revisited
We opened with Coca-Cola — research that asked the wrong question. Now the inverse failure: not asking at all.
Case Study · 2009 · Tropicana Packaging Redesign
A ₹500 Crore Sales Collapse in 60 Days
In 2009, Tropicana redesigned its orange juice packaging — dropping the iconic orange-with-a-straw image and replacing it with a glass of poured juice. The redesign came from a leading design agency, and the iconic element was not dropped carelessly: the design team believed they were modernising the brand.
Sales dropped 20% in the two months after launch. Customer complaints flooded in. Tropicana reinstated the old packaging within weeks.
The failure was not that the new design was objectively bad. The failure was that nobody tested it adequately before committing. Tropicana had not asked customers what the orange-with-a-straw meant to them — that it was a shorthand signal for "fresh, real fruit, not from concentrate." Without that icon, customers felt uncertain about what was in the carton. The cognitive uncertainty translated directly into non-purchase.
The inverse of the Coca-Cola lesson: Coca-Cola researched too narrowly — it measured taste but missed brand meaning. Tropicana barely researched at all — it committed to a major visual identity change without understanding what the existing elements were communicating. Both failures are Step 1 failures: an incomplete problem definition.
Coca-Cola 1985
Problem framed as: "Which formula tastes better in isolation?"
Should have included: "What does Coca-Cola mean to people, and how would losing the original feel?"
Failure type: Researched the wrong question
Tropicana 2009
Problem framed as: "How do we modernise the packaging?"
Should have included: "What does the current packaging communicate, and what happens if we remove any element of it?"
Failure type: Didn't research at all
Research is not expensive. Research failures are. Both Tropicana and Coca-Cola were sophisticated companies with experienced research teams and substantial budgets. The lesson isn't that more research prevents failure — it's that better problem definition does. All the methodology in the world cannot save a poorly framed question.
Test Your Understanding
Three Questions
Not to check recall — to check whether the concepts have actually landed.
1 · "Coca-Cola's New Coke research failed because the methodology was flawed." True or false?
False. The methodology — 200,000 blind taste tests across 25 cities — was rigorous, and the sample size was enormous. The failure was at Step 1: the research problem was defined as "which formula tastes better?" when it should have included "what does the Coca-Cola brand mean to people, and how would they feel if it changed?" The method executed flawlessly on the wrong question.
2 · You're launching a new app in a category that barely exists in India. Which research approach should come first? (Think about the sequence: explore → describe → test causality.)
Exploratory qualitative research. When a category barely exists, you first need exploratory work to understand whether and how the need exists in people's lives — in their language, not yours. A national survey would measure awareness of something people don't yet have concepts for. An A/B pricing test assumes you already know what you're selling and who wants it. Conjoint assumes you have features to trade off. Depth interviews and ethnography come first — to find out what you don't know you don't know.
3 · A Zomato analyst says: "30% of users who open the app between 11 PM and midnight don't place an order." This is best described as what? (Apply the data vs. insight distinction.)
Data. It is a number describing what happened. An insight would be: "Late-night non-converters are primarily browsing for tomorrow's lunch options — they're planning, not ordering. Showing a 'Schedule for tomorrow' nudge to this segment increases their next-day conversion by 18%." That changes how you understand the behaviour and directly implies what to do. Behavioural data is merely a source — how reliable the source is doesn't determine whether the output is data or insight.
Where This Goes Next
Cross-References & Reflection
Every tool introduced here connects to deeper work. And three questions worth sitting with before you move on.
Connected Topics
🎯 STP — Segmentation, Targeting, Positioning (coming soon)
Perceptual mapping in full depth. Using cluster analysis output from research to define segments. Identifying positioning whitespace.
📊 Marketing Metrics & Analytics (coming soon)
TAM/SAM/SOM in financial modelling. ROMI, CLV, A/B testing at scale, digital analytics frameworks.
🔬 Advanced Marketing Research (coming soon)
Full conjoint study design and analysis. SEM, EFA, CFA. Market basket analysis. Bass diffusion model for forecasting new product adoption.
🧠 Consumer Behaviour — Neuromarketing (coming soon)
The full treatment of how brain science applies to marketing decisions. EEG, fMRI, biometrics in practice. What P&G, Unilever, and Nestlé actually use.
🏥 Services Marketing — SERVQUAL & Gap Model (coming soon)
Measurement tools specific to service quality research. The five gaps, SERVQUAL dimensions, and how to design surveys that actually improve service delivery.
📐 Defining Marketing — Full Module (live)
The foundation this page builds on. Core concepts, company orientations, the MIS in its broader context, and the value architecture.
Before You Move On — Think About This
Three questions. Don't research them. Sit with them.
1
Think of a product category you know well — one you've bought recently. What question would you design research around if you were the brand manager? What type of research — exploratory, descriptive, or causal — would you use, and why? What type would give you the wrong answer if used alone?
2
The Tropicana and Coca-Cola failures both happened to sophisticated companies with experienced research teams and substantial budgets. What does that tell you about the relationship between research quality and the quality of the questions asked? Can you think of a recent brand decision in India that looks like one of these failure patterns?
3
Consider your own stated preferences vs. your revealed preferences. Think of something you say you care about in a product category — quality, sustainability, price — and what you actually buy most often. If you ran conjoint analysis on yourself, which attribute would score highest, and does that match what you'd say in a survey?
The Central Idea
"Research is not expensive. Research failures are."
Both Coca-Cola and Tropicana prove this from opposite directions. The discipline of marketing research is not about running studies — it's about asking questions that are worth answering.