Building Behavioral Panels That Work
This article is based on a talk delivered at the AEDEMOTV Media Research Conference held in Spain in March 2026.
The Measurement Paradox Nobody Talks About
We tend to obsess over data quality in market research, and not without reason. Survey (declarative) data has well-documented limitations: people don’t recall accurately, they often rationalize after the fact, and they may say what they think you want to hear. Behavioral data, observing what people actually do, is often seen as the logical answer.
But there’s a tension that is rarely acknowledged: the act of measuring with extreme precision can itself introduce “weight.”
Let’s illustrate this idea. Imagine a runner climbing a mountain while wearing a backpack that measures their velocity with perfect precision. GPS, accelerometers, pace counters; the data is exquisite. The problem is that the backpack itself adds weight. The runner is no longer climbing the mountain; they're climbing the mountain plus the instrument. The more precise the measurement apparatus, the heavier the pack, and the further the recorded performance drifts from what would have happened without it.
In behavioral panels, this paradox plays out quietly but with consequential effects. The SDK that passively tracks location also drains battery. The metering software that captures purchase behavior requires system permissions, which makes people uneasy. Each instrument, individually justifiable, collectively adds mass. And there is a threshold, different for every person, beyond which the runner stops and takes the backpack off.
Then layer in the "Observer Effect" (aka the Hawthorne Effect). When people know they are being watched, their behavior shifts, sometimes subtly and sometimes more significantly. An app that pings participants for daily check-ins is not a neutral instrument; it is a persistent reminder of being observed that is difficult to ignore. At that point, you are likely no longer capturing natural behavior, but rather behavior under observation. The more problematic aspect is that this distortion rarely signals itself in the dataset; the metrics remain internally consistent, and nothing looks obviously wrong. The bias only becomes visible when outcomes fail to align with reality.
Good measurement, then, has two properties that are in constant tension: it must be lightweight and transparent. Light enough not to distort what it's capturing. Transparent enough to maintain normal behavior. Threading this needle is the actual challenge of building a behavioral panel, and most panels fail at it before they recruit their first member. Here's the uncomfortable takeaway: perfect behavioral measurement is probably unattainable. The goal isn't purity; it's minimizing the distortion while being honest about what remains. The panels that pretend otherwise are the ones that produce the cleanest-looking data and the most misleading insights.
Most Panels Are Built for the Wrong Person
When organizations decide to build a behavioral panel, the first conversations usually go something like this: "Let's add SDK tracking. And passive metering. And purchase data. And location signals. And social media linkage. And..." This instinct is understandable. Researchers want richness. Clients want depth. The temptation is to stack every possible data source and call it comprehensive. And it's an instinct that usually goes unchecked because the people making these decisions will never experience the panel firsthand. The problem is that none of this conversation centers on the person who actually has to live inside it.
Most panels are architected for researchers. The panelist is an afterthought, a data source to be optimized, a consent checkbox to be cleared, a churn number to be minimized. The result is panels that are technically sophisticated and humanly exhausting. High weight and friction. Low engagement. And a gradual selection bias emerges, as only the most motivated or the least privacy-conscious participants remain.
The counterintuitive truth: the panels with the best data are the ones that prioritize the panelist experience above everything else. This isn't idealism. It's measurement science. If your panel churn is high, your longitudinal data is compromised; if your engaged base skews toward a specific behavioral profile, your representativity is gone. You can deploy the most advanced tracking infrastructure available and still end up with fundamentally flawed data. The technical sophistication becomes a fancy wrap around a weak foundation.
The Three-Part Formula to Build a Behavioral Panel That Actually Works
After two decades of building and refining panels at Netquest-Bilendi, including a behavioral sub-panel that took years to get right, the underlying logic comes down to three principles. They sound simple, but they’re not easy to execute. Let’s dive in.
1. Empathy First
Before designing a single touchpoint, ask yourself one question: Would I participate in this panel? Not "would a target respondent tolerate this?" Would you? With full knowledge of what's being tracked, how often, and for what compensation? This question is uncomfortable because the honest answer is often no: the app is too intrusive, the value exchange falls short, the check-in frequency is annoying. And if you're honest about why you wouldn't participate, you've just run the most useful product audit.
Empathy-first design means genuinely stress-testing the participant experience before launch, making hard trade-offs between data richness and participant burden. These trade-offs are not comfortable to make in a room full of researchers, but they are necessary. The question is not "what data would be useful to have?" but "what data can we collect without breaking what we're trying to measure?"
There is a phrase that functions well as a standing filter for any new data requirement: Does this earn its weight? If a new tracking feature adds 7% more data richness but increases app battery consumption by 15% and requires an additional permissions dialog, it does not earn its weight. This filter disciplines data collection, placing the person you are trying to understand at the center, prioritizing respondent experience over raw data volume.
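To make the filter concrete, it can even be written down as a rough scoring rule. The sketch below is purely illustrative (the field names, multipliers, and scale are my own assumptions, not an actual Netquest-Bilendi review process), but it shows how the example above fails the test once participant burden is deliberately penalized more heavily than data richness is rewarded.

```python
from dataclasses import dataclass

# Hypothetical sketch of the "does this earn its weight?" filter.
# Fields and multipliers are illustrative assumptions, not a real process.

@dataclass
class FeatureProposal:
    name: str
    richness_gain: float           # estimated % increase in data richness
    battery_cost: float            # estimated % increase in battery drain
    extra_permissions: int         # additional permission dialogs required
    extra_prompts_per_week: float  # added interruptions for the panelist

def earns_its_weight(f: FeatureProposal) -> bool:
    """Reject any feature whose participant burden outweighs its data value."""
    # Each burden is converted onto the same scale as richness. The
    # multipliers encode a deliberate bias toward the panelist: burden
    # is penalized more heavily than richness is rewarded.
    weight = (
        f.battery_cost * 1.5
        + f.extra_permissions * 5.0
        + f.extra_prompts_per_week * 2.0
    )
    return f.richness_gain > weight

# The example from the text: +7% richness, +15% battery, one new permission.
gps_sampling = FeatureProposal("high-frequency GPS", 7.0, 15.0, 1, 0.0)
print(earns_its_weight(gps_sampling))  # False: it does not earn its weight
```

The exact numbers matter less than the asymmetry: any honest version of this rule makes burden expensive and richness cheap, so the default answer to a new tracking feature is no.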
2. Transparency Always
Think about what it takes to feed a wild animal from your hand. You cannot rush it. You move slowly, stay low, and make no sudden gestures. You show up at the same time each day with the same food and the same posture. You let the animal set the pace, because any attempt to accelerate on your terms resets weeks of progress. Trust is built through repetition and restraint; the moment you move too fast, it is gone. The animal does not give you a second chance to explain yourself.
Building panelist trust works much the same way; it must be nurtured as an ongoing process rather than treated as a one-time event. (To be clear, I’m not suggesting that the participant is “a wild animal”; rather, the fragile nature of the trust they place in us is comparable.) The instinct when designing a consent flow is to move through it efficiently (e.g., a wall of terms, a checkbox, or a signature). Once "done" and legally covered, the research begins. However, that is not trust-building; that is liability management dressed up as communication, and participants know the difference. In this regard, the "wall of shame" is led by certain "panels" that merely embed SDKs in apps designed for other purposes. They "solve" the problem by operating in a grey zone; they obscure what is actually being tracked and bury disclosures in dense, technical terms and conditions that few participants meaningfully engage with. While this may formally satisfy consent requirements, it arguably falls short of genuine consent; it is misaligned with the spirit, if not the letter, of GDPR. Clients relying on such data should recognize that responsibility does not end at procurement; it extends to them as well.
The foundation of trust is real transparency and real consent; it means explaining in plain language what data is being collected, when, and why. A one-pager written the way you'd explain it to a friend does more for engagement quality than any incentive structure. It changes the participant's mental model from "I'm being monitored" to "I understand what I've agreed to."
This also means providing genuine control rather than the illusion of it. There is a meaningful difference between a consent architecture that offers real opt-out mechanisms and one that technically provides them while making them difficult to find. Participants notice the difference, even if they cannot always articulate it. Trust is earned slowly and lost instantly; a panel that corners people with opaque tracking or buried opt-outs will see the impact on data quality long before they see it in their retention numbers.
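One way to make "genuine control" concrete at the data-model level is per-stream, independently revocable consent with an audit trail. The sketch below is a hypothetical illustration (the class and stream names are mine, not a description of any particular panel's system); the point is that opting out of one stream is a first-class, one-tap operation rather than a buried setting.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical consent model: each data stream is consented to separately,
# revocable on its own, and every change is recorded. Names are illustrative.

@dataclass
class StreamConsent:
    stream: str                                  # e.g. "location", "purchase_metering"
    granted: bool
    history: list = field(default_factory=list)  # audit trail of changes

    def set(self, granted: bool) -> None:
        """One-tap opt-in/opt-out; the change takes effect immediately."""
        self.granted = granted
        self.history.append((datetime.now(timezone.utc), granted))

# A panelist can switch off one stream without leaving the panel entirely.
location = StreamConsent("location", granted=True)
location.set(False)  # opting out is as easy as opting in
print(location.granted, len(location.history))  # False 1
```

The design choice worth noticing is granularity: when consent is all-or-nothing, a participant's only real opt-out is uninstalling, which is exactly the churn a panel cannot afford.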
3. Value Exchange: A careful balance
While cash is the bluntest instrument available, over-reliance on it is a structural mistake; a well-designed incentive scheme should balance intrinsic and extrinsic motivations, while carefully calibrating payout timing and amounts.
There is extensive literature on how high payouts can trigger participant deception about eligibility, increase pressure to finish surveys (but not necessarily to answer carefully), and introduce systematic selection bias (reward seekers, people who will tolerate almost anything for points). The "pay more, get better data" assumption is therefore far too simplistic. On the other hand, low monetary incentives will stall panelist growth, increase churn, or skew the panel toward highly opinionated participants. Interestingly, fraud exists at both ends of the spectrum: high rewards attract sophisticated bad actors, while low rewards often come with laxer controls that invite "volume-based" fraud.
To build a high-quality panel, we must intentionally create “intrinsic value”. This is the sense of contributing to something meaningful, the feeling of being heard, and the satisfaction of belonging to a community that shapes real-world decisions. People want to feel like participants, not data vendors. The difference between those two self-images is reflected directly in the quality and longevity of the data they provide.
If possible, “closing the loop” is the most effective mechanism: when a participant's data helps shape a product or strategy, you must tell them. Avoid vagueness and offer specificity that feels tangible: "Because of the patterns we observed in your cohort, this brand changed its distribution strategy." This shifts the exchange from transactional (data for money) to relational (contribution for outcome).
In a passive behavioral data collection environment, it is tempting to minimize interaction to avoid the Hawthorne Effect. However, silence can also backfire: without periodic, low-burden "check-ins," participants lose their connection to the project, leading to "ghosting" or uninstalls.
These “micro-interactions” (small moments of acknowledgment, milestone markers, aggregate insights shared back, brief explanations of recent research impact, or even short surveys) keep the relationship alive.
Think of the relationship with a long-term panelist as a plant you're growing from seed. You water it consistently: not in dramatic surges when you remember, but regularly, in measured amounts. Miss a few days, and the damage is invisible at first. Let it go long enough, and no amount of catching up reverses it. Engagement in a behavioral panel compounds in the same way. You trust the process, tend to it quietly, and over months, it becomes something that can hold its own weight.
Examining the Assumptions of the Behavioral Panel Formula
Behavioral panels represent the next frontier of market research for good reason: passive measurement removes the fog of recall bias, and single-source data cuts through the noise of reconciling disparate data streams. However, most behavioral panel strategies rest on underlying assumptions that are rarely examined. Acknowledging these "silent" factors doesn't invalidate the formula, but ignoring them compromises the integrity of research: we want sound data, so we must look closely at what we are taking for granted.
The first assumption is that no participant is "inelastic" (in the economic sense) about their privacy. The framework above assumes that with enough transparency and empathy, people will naturally respond with trust and sharing. But privacy fatigue is real, and it compounds. There is a non-trivial segment of the population for whom no amount of thoughtful design can overcome the fundamental discomfort of behavioral tracking. They don't trust promises of anonymization; they've been burned before. The "creep factor" of passive monitoring is simply a line they won't cross, regardless of how elegant your consent flow is. A research formula that fails to acknowledge this inelasticity is built on a premise that is simply not true.
The second assumption is that authentic behavior is fully recoverable. The backpack metaphor has a hard limit: even the lightest possible measurement instrument introduces some distortion. Beyond the friction of the tools, the mere awareness of being in a panel changes people's relationship to their own behavior. They notice things they wouldn't otherwise notice. They make different choices (sometimes more considered, sometimes more performative) because some part of their brain registers the observation. This cannot be fully mitigated; it can only be managed. The honest goal is not pure measurement but minimally distorted measurement, and researchers should be precise about what that distinction means when they present findings.
The third assumption is that managing intrinsic motivation is achievable at scale. While financial incentives remain a necessary fixture, this model emphasizes building engagement through contribution, community, and meaning. This "relational" approach works compellingly at the level of individual interactions. However, whether it can scale to tens of thousands of panelists while maintaining the depth and quality of those relationships remains a significant operational challenge. Can a sense of "belonging" be mass-produced, or does the human element inevitably dilute as the panel grows?
The industry’s reflex is to lean on sophisticated metering software and cutting-edge tech stacks. While these tools matter, they are ultimately downstream of a more fundamental question: Have you earned the right to ask this much of your participants? The "secret formula" for high-quality behavioral data isn't found in better tech or higher monetary incentives; it is found in a shift in disposition.
If you build for the panelist, the data will follow.
Build panels with heart.
Enric has been on the front lines of building Netquest from concept to a 250+ employee venture that disrupted consumer data collection in the market research industry. During this 15-year journey, he has led teams on the ground in four different countries (Spain, Brazil, Chile, and the USA). Today, back in his hometown of Barcelona, Enric leads Netquest-Bilendi's product portfolio and strategic projects.


