It’s a Tuesday morning in a city he doesn’t live in.
He woke up in a hotel room, connected his phone to the room’s WiFi, and checked his email before his feet hit the floor. His device had already done more than that. Before he opened a single app, its radios had logged onto the local cellular network, a carrier he doesn’t subscribe to, broadcasting its presence through roaming agreements he accepted under terms of service he never read. His rental car, parked in the garage below, had been doing the same since he picked it up at the airport. Its telematics system noted the time he left the lot, the route he took to the hotel, and the total miles driven. That data belongs to the rental company. And to their partners.
By the time he sat down at the café, his morning had already been documented by systems he never thought about and people he’ll never meet.
She was already there when he arrived.
They ordered coffee. They paid separately, each tapping a card within minutes of the other at the same terminal, at the same address, at the same time. Two transaction records. Two institutions. One moment. The café’s loyalty app, which he downloaded the night before after searching for “quiet coffee shops near me,” noted his first visit and began building a profile. The search that led him there had already done the same.
Outside, a traffic camera recorded the intersection. Inside, the café’s security system logged the room. The young woman behind the counter remembered his face.
At some point during their conversation, his phone buzzed. A family member had posted on social media, something warm, something public. That they’d miss him while he was away. That they were looking forward to having the girls over while he was gone.
He smiled and put his phone face-down on the table.
By the time they left, the morning had produced a quiet inventory: device signals, co-location coordinates, network handshakes, financial transactions, search history, app registrations, camera footage, a social post confirming his absence from home. None of it dramatic. None of it, on its own, particularly meaningful.
But data doesn’t need to be dramatic to be useful. It needs to be consistent. And it needs to be stored.
Because the question was never what happened that Tuesday morning. The question, the one that gets asked weeks, months, or years later, when something else entirely draws someone’s attention, is who was he, where had he been, and who was she?
The answer, it turns out, was already waiting.
Now. Who was he?
A military officer on a temporary assignment. A federal agent meeting a confidential source. A corporate executive negotiating a sensitive acquisition. A foreign national under surveillance. A private citizen with no particular significance to anyone.
The answer doesn’t change what was collected. It only changes who finds it useful.
The Technical Security False Summit
Over the past two decades, the security industry has made genuine progress. Encryption became standard. Multi-factor authentication moved from optional to expected. Operating systems now patch automatically. Devices lock. Networks segment. Developers build with security in mind in ways they simply didn’t before.
This progress is real, and it matters. But it solved a specific version of the problem, the technical version, the one where an adversary needs to breach a system, exploit a vulnerability, or intercept a transmission to learn something useful about you.
That version of the threat hasn’t disappeared. But it has become, in relative terms, the harder path.
While the security industry was hardening endpoints and encrypting traffic, something else was happening in plain sight. The devices people carried, the platforms they used, and the services they consented to, often without reading a word of the agreement, were building something far more revealing than anything a breach could expose. Not a snapshot. A continuous record. Not what you stored. What you did.
A compromised password can be changed. A leaked file can be contained. But a years-long behavioral record, your patterns of movement, your routines, your associations, your deviations, cannot be patched. It already exists. In most cases, it exists in multiple places simultaneously, held by entities with varying degrees of accountability and security of their own.
The perimeter that most security frameworks were built to defend is a logical construct, a boundary between inside and outside, trusted and untrusted. That model assumed the most valuable data lived inside the perimeter. It doesn’t anymore. The most valuable data about you is generated continuously, transmitted willingly, and stored indefinitely by systems you interact with every day because they are useful, because they are convenient, and because opting out has never been made easy enough to be a realistic choice for most people.
The technical controls got better. The behavioral exposure grew. And the gap between those two trajectories is where the real vulnerability lives.
Pattern, Routine, and the Intelligence Value of Ordinary Life
Long before behavioral data became a commercial product, it was a targeting methodology.
In military and intelligence operations, pattern-of-life analysis, or POL, is exactly what it sounds like. It is the systematic observation of an individual’s routines: where they go, when they go there, who they meet, how long they stay, and when they deviate from what they normally do. It is not a single observation. It is the accumulation of observations over time, cross-referenced across multiple collection vectors, until a picture emerges that is detailed enough to act on.
The value of POL analysis isn’t any single data point. It’s the baseline. Once you know what normal looks like for a person, their rhythms, their habits, their patterns of association, anything that falls outside that baseline becomes significant. An unusual contact. A new location. A change in communication frequency. A gap in an otherwise consistent record. In targeting, anomaly detection isn’t a technical function. It’s a human one. The analyst’s job is to understand a subject well enough that deviation registers as signal rather than noise.
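To make that logic concrete, here is a minimal sketch of a baseline-and-deviation check. Everything in it, the record schema, the place names, the hour-level granularity, is an illustrative assumption rather than any agency’s actual tooling. The point is how little machinery the core idea requires.

```python
from collections import Counter, defaultdict
from datetime import datetime

# Each observation: (timestamp, place); the schema is illustrative.
history = [
    (datetime(2024, 3, 5, 8, 10), "cafe_main_st"),
    (datetime(2024, 3, 6, 8, 5),  "cafe_main_st"),
    (datetime(2024, 3, 7, 8, 12), "cafe_main_st"),
    (datetime(2024, 3, 7, 18, 0), "gym_5th_ave"),
]

def build_baseline(observations):
    """Count how often each place appears in each (weekday, hour) slot."""
    baseline = defaultdict(Counter)
    for ts, place in observations:
        baseline[(ts.weekday(), ts.hour)][place] += 1
    return baseline

def is_anomalous(baseline, ts, place):
    """Flag an observation that falls outside the subject's routine."""
    slot = baseline.get((ts.weekday(), ts.hour))
    return slot is None or place not in slot

baseline = build_baseline(history)
# A familiar place at a familiar hour is noise; an unfamiliar one is signal.
print(is_anomalous(baseline, datetime(2024, 3, 12, 8, 7), "cafe_main_st"))  # False
print(is_anomalous(baseline, datetime(2024, 3, 12, 8, 7), "hotel_bar"))     # True
```

Once the baseline exists, the expensive part of the analysis is already done. Evaluating any new observation is a lookup.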
What made this methodology effective in operational environments is precisely what makes its civilian equivalent so consequential today: people are creatures of habit, and habits are legible.
The morning coffee at the same café. The commute that follows the same route. The gym visits on the same days each week. The calls home at a predictable hour. The online activity that clusters around certain times of day. Individually, none of these things feels like intelligence. Collectively, they form a profile detailed enough to answer questions the subject never knew were being asked.
And unlike traditional intelligence collection, which required resources, access, and deliberate effort, the behavioral record that most people generate today is produced automatically, continuously, and without any awareness that a record is being created at all.
Consider what a single weekday produces. A smartphone logging location throughout the day. A transit card recording entry and exit times at specific stations. A building access system noting arrivals and departures. A fitness tracker capturing a lunch break walk. A credit card timestamping a purchase at a specific vendor. An email client logging login times and active hours. A social media platform recording what was viewed, for how long, and what prompted engagement.
None of these systems was designed with targeting in mind. They were designed for convenience, for personalization, for operational efficiency. But the data they generate is functionally identical to what an intelligence analyst would collect deliberately. The difference is scale, automation, and the fact that the subject is an active and willing participant in their own documentation.
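A toy example illustrates how little fusion is required. The sources and records below are invented, but the operation, merging independently collected logs into a single chronological trace, is exactly what makes these systems functionally equivalent to deliberate collection.

```python
from datetime import datetime

# Four unrelated systems, each logging for its own legitimate purpose.
# All records and field formats here are hypothetical.
transit_taps = [(datetime(2024, 3, 5, 7, 42), "entered station: Oak St")]
card_charges = [(datetime(2024, 3, 5, 8, 9),  "purchase: Main St Cafe, $6.40")]
badge_swipes = [(datetime(2024, 3, 5, 8, 55), "badge-in: HQ lobby")]
fitness_logs = [(datetime(2024, 3, 5, 12, 15), "walk started: 1.2 mi")]

def fuse(*sources):
    """Merge records from unrelated systems into one chronological trace.
    No single source is revealing; the merged sequence is a day's narrative."""
    return sorted((ts, event) for source in sources for ts, event in source)

for ts, event in fuse(transit_taps, card_charges, badge_swipes, fitness_logs):
    print(ts.strftime("%H:%M"), event)
```

The sort is the entire analysis. Everything difficult, the collection, the timestamps, the identity linkage, was done in advance by systems built for convenience.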
This is the point that most conversations about digital privacy miss. The threat is not primarily that someone will hack your phone and read your messages. The threat is that your phone, and everything connected to it, has been quietly building a targeting package on your behalf for years, and that package is available, in whole or in part, to anyone with the means and motivation to access it.
That includes foreign intelligence services. It includes criminal organizations. It includes domestic actors whose interests may not align with yours. It includes, in some cases, commercial platforms whose data practices are opaque enough that the downstream recipients of your behavioral record are effectively unknown.
The operational logic is the same regardless of who is asking. Find the baseline. Watch for the anomaly. Follow the deviation.
The only thing that has changed is that today, the baseline builds itself.
AdTech and the Architecture of Behavioral Profiling
To understand how behavioral data became the primary threat surface, you have to understand who perfected its collection first. It wasn’t a foreign intelligence service. It wasn’t a government surveillance program. It was the advertising industry.
The business model that funds most of the internet is simple in concept and extraordinarily sophisticated in execution. Platforms offer free services in exchange for attention, and attention generates data. That data is analyzed, segmented, and sold to advertisers who want to reach specific people at specific moments with specific messages. The more precisely a platform can predict what a person wants, fears, or responds to, the more valuable that person’s attention becomes as a commodity.
To do that with any precision, you need to understand behavior. Not demographics. Not age or income bracket or zip code, though those matter too. You need to know what a person actually does. What they search for at 11pm. What content makes them stop scrolling. What purchases they almost made. What topics they return to repeatedly. What emotional states correlate with what kinds of decisions.
This is the infrastructure that AdTech built over the past two decades, and it is, without exaggeration, the most sophisticated behavioral mapping apparatus in human history. It operates across devices, across platforms, across physical and digital environments. It follows a person from a search query on their laptop to a purchase at a physical store to a conversation topic picked up by a voice-enabled device. It builds psychographic profiles, not just demographic ones, meaning it attempts to model not just who you are but how you think and what drives your decisions.
The data points feeding this system are not fundamentally different from the collection vectors described in the opening scene. Location. Transaction history. Network activity. Search behavior. Social connections. Content consumption. The architecture was built for commerce. But the data it produces is indifferent to its original purpose.
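A deliberately crude sketch shows the shape of the inference. The event schema and the two derived signals below are hypothetical stand-ins; production systems model thousands of features, but the reduction from raw engagement to a psychological read works the same way.

```python
from collections import Counter

# Hypothetical engagement log: (hour_of_day, content_category, dwell_seconds)
events = [
    (23, "personal_finance", 140),
    (23, "job_listings", 210),
    (12, "sports", 30),
    (22, "personal_finance", 95),
]

def profile(events):
    """Reduce raw engagement events to a crude psychographic sketch:
    what the person dwells on, and when they dwell on it."""
    dwell_by_topic = Counter()
    late_night = 0
    for hour, topic, dwell in events:
        dwell_by_topic[topic] += dwell
        if hour >= 22 or hour < 5:
            late_night += dwell
    total = sum(dwell_by_topic.values())
    return {
        "dominant_interests": [t for t, _ in dwell_by_topic.most_common(2)],
        "late_night_share": round(late_night / total, 2),
    }

print(profile(events))
# {'dominant_interests': ['personal_finance', 'job_listings'], 'late_night_share': 0.94}
```

Even this toy output reads as a story: someone researching money and new jobs, late at night. That is a psychological state, inferred from nothing but timestamps and categories.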
This is where the security community has been slow to reckon with a difficult reality. When people talk about data brokers and AdTech as privacy concerns, the conversation tends to center on targeted advertising as the harm. Annoying, perhaps. Manipulative, arguably. But most people do not experience it as a security threat in any concrete sense. That framing understates the problem significantly.
Data brokers do not only sell to advertisers. They sell to anyone willing to pay. Law enforcement agencies have purchased location data rather than obtain warrants. Insurance companies have used behavioral data to assess risk in ways never disclosed to the people being assessed. And foreign intelligence services, particularly those operating through intermediary commercial entities, have acquired American consumer data through entirely legal channels because the regulatory framework governing its sale has not kept pace with the scale or the stakes of what is being traded. The Brennan Center for Justice has noted that without comprehensive limitations on data transactions, foreign governments can purchase detailed dossiers on American citizens for espionage recruitment or other purposes, unconstrained by U.S. law.
The scope of this problem is not theoretical. In January 2022, the Office of the Director of National Intelligence released a partially declassified report confirming that commercially available data can be readily deanonymized to identify individuals, and that in the wrong hands it creates direct risk of blackmail, stalking, and targeted harassment. The report made a point that deserves to be quoted directly: the government, it noted, would never have been permitted to compel billions of people to carry location tracking devices at all times, to log most of their social interactions, or to keep detailed records of their reading habits – yet smartphones, connected cars, web tracking technologies, and IoT devices achieved exactly that without any government mandate. The behavioral profiles that AdTech generates are detailed enough to identify a person’s political leanings, religious practices, relationship status, financial stress, health concerns, and professional vulnerabilities. They are detailed enough to inform a social engineering approach tailored to a specific individual. They are detailed enough, in the right hands, to serve as the foundation of a targeting package that a state-sponsored intelligence operation would have spent significant resources to build through traditional means.
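The deanonymization the ODNI report describes is not exotic. Published research on mobility datasets has found that as few as four spatio-temporal points are enough to uniquely identify the large majority of individuals. The sketch below, using entirely invented data, shows why: in a dataset of “anonymous” traces, a couple of externally known sightings often narrow the candidates to one.

```python
# "Anonymized" traces: no names, just an opaque ID and (hour, cell) sightings.
# All IDs, cells, and records are invented for illustration.
traces = {
    "user_0413": {(8, "cell_downtown"), (13, "cell_midtown"), (19, "cell_eastside")},
    "user_0927": {(8, "cell_downtown"), (13, "cell_airport"), (19, "cell_westside")},
    "user_1184": {(9, "cell_harbor"), (13, "cell_midtown"), (19, "cell_eastside")},
}

def reidentify(traces, known_points):
    """Return the opaque IDs consistent with a handful of externally
    known sightings (a tagged photo, a receipt, a badge record)."""
    return [uid for uid, points in traces.items() if known_points <= points]

# Two points learned from entirely ordinary sources already suffice here.
print(reidentify(traces, {(8, "cell_downtown"), (13, "cell_midtown")}))
# ['user_0413']
```

The opaque ID was never the protection. The pattern was always the identifier.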
The person in the opening scene downloaded a café loyalty app because it offered a discount on his next visit. In doing so, he added another node to a behavioral graph that was already extensive. He didn’t make a security decision. He made a convenience decision. And that is precisely the dynamic that makes this threat so difficult to address through technical means alone.
Technology did not create this vulnerability. Behavior did. And behavior is not something you can patch.
When Behavioral Data Becomes a Weapon
Surveillance and targeting are not the end state. They are the enablers of something more consequential.
When behavioral data exists at scale, when millions of individual profiles have been built, refined, and cross-referenced, the capacity it creates goes beyond identifying who someone is or where they have been. It creates the ability to predict how they will respond, and more importantly, to influence what they do next. That shift, from observation to manipulation, is the defining feature of cognitive warfare as it is practiced today.
Cognitive warfare is not a new concept. Psychological operations have been a component of military and intelligence activity for as long as organized conflict has existed. What has changed is the precision. A recent peer-reviewed study published in the RUSI Journal by researchers Bonnie Rushing and Shouhuai Xu offers one of the most rigorous frameworks available for understanding this evolution. Their work draws a critical distinction between traditional influence activities, which primarily sought to shape conscious belief formation through overt messaging and narrative control, and cognitive attacks, which deliberately target subconscious cognitive processes that govern perception, judgment, and action – often at scale, with advanced technology, and with increasing levels of personalization. Traditional influence operations worked at the population level, broadcasting messages designed to shift sentiment broadly, accepting significant noise in the signal because there was no mechanism to target more precisely. The behavioral data ecosystem changed that calculus entirely.
A hyper-detailed behavioral profile tells an adversary not just who a person is but what they are susceptible to. What narratives resonate with them. What fears are active. What relationships are strained. What professional pressures are present. What content they engage with and what they dismiss. This is not a generalized psychological profile. It is an individually tailored map of cognitive vulnerabilities, and it can be acted on with a precision that was operationally impossible even fifteen years ago. Rushing and Xu’s analysis of real-world cognitive attack cases, including Russian influence operations targeting the 2016 U.S. election and the documented use of AI-assisted synthetic media, confirms that nation-state actors have effectively normalized these operations as a standard instrument of conflict below the threshold of conventional kinetic war.
The mechanisms for executing these attacks are widely available. Social media platforms, by design, serve content that provokes engagement, and engagement correlates strongly with emotional activation. An adversary who understands a target’s behavioral profile can introduce content, narratives, or contacts into that person’s information environment calibrated specifically to their known vulnerabilities. This does not require hacking. It does not require direct contact. It requires access to the same platforms and data infrastructure that billions of people use every day.
Stanford researcher Michal Kosinski, whose foundational work on psychographic profiling helped establish the academic basis for this type of targeting, has confirmed that psychological targeting using behavioral data is not only possible but effective as a tool of digital mass persuasion. The commercial deployment of these techniques through platforms like Facebook demonstrated at scale what intelligence services had long understood in theory: that behavioral data, properly analyzed, allows an actor to predict and influence individual decisions with a degree of precision that demographic targeting alone cannot approach.
For high-consequence individuals, the implications are specific and serious. A law enforcement officer whose political views, financial situation, and social relationships are knowable through commercial data is a more targetable asset for corruption or compromise than one whose life remains relatively opaque. A federal employee whose behavioral record reveals a pattern of stress, professional frustration, or ideological drift is a more viable candidate for recruitment or manipulation than one whose profile offers no such leverage. A military contractor whose travel patterns, communications behavior, and personal associations are documented in commercial databases presents a different kind of exposure than the access controls on their facility badge were designed to address.
None of these scenarios requires a sophisticated cyberattack. They require behavioral intelligence, patience, and a target who has never considered themselves a target.
This is the operational reality that the phrase “human is the weakest link” gestures at without fully capturing. The weakness is not simply that people make mistakes or fall for phishing emails. The weakness is that people generate a continuous, detailed, and largely unprotected record of their inner lives through their digital behavior, and that record can be read by actors whose intentions are not benign and whose methods are not detectable by any firewall or endpoint protection platform on the market.
The behavioral layer is not a gap in an otherwise functional security posture. For most individuals and organizations, it is an unrecognized frontier, one that has been actively exploited while the conversation about digital security remained focused almost entirely on the technical domain.
The Gap No Tool Can Close
There is a predictable pattern to how organizations respond when they become aware of the behavioral threat landscape. They buy something.
A new monitoring platform. A threat intelligence subscription. A digital footprint assessment tool. These are not bad investments. Some of them are genuinely useful. But they share a foundational limitation that rarely gets acknowledged in the sales cycle: they address the symptom while leaving the underlying condition untouched.
The reason behavioral data collection is such an effective threat vector is not that people lack the right software. It is that the behaviors generating the exposure are deeply habitual, often invisible to the people producing them, and reinforced daily by platforms and services specifically designed to make them feel natural and frictionless. You cannot install a patch for that. You cannot configure a setting that changes it. The only thing that changes it is awareness followed by deliberate behavioral adjustment, and that is a significantly harder problem than deploying a tool.
Consider how the gap actually manifests in practice. An organization invests in a mobile device management solution and enforces encryption across all company-issued hardware. The policy is sound. The implementation is competent. And then an employee uses their personal phone to send a work-related message through a consumer messaging app because it was faster. Or they log into a work account from a hotel WiFi network without a VPN because they needed to check something quickly. Or they accept a LinkedIn connection request from someone they don’t recognize because the profile looked credible and the mutual connections were real.
None of these are failures of technology. They are failures of habit, and habit is governed by what feels normal, not by what policy documents say. The convenience architecture that AdTech helped build over the past two decades has made frictionless sharing feel like the default state. Anything that introduces friction, a second authentication step, a pause before clicking, a moment of skepticism before accepting a connection, registers as an interruption to a workflow rather than as a security behavior. That psychological reality does not change because an organization purchased a new platform.
This is the dimension that operational security training, when it is done well, is actually trying to address. Not the technical configurations. The mental model. The goal is to shift how a person understands their own digital behavior, to make visible the collection that normally goes unnoticed, and to build habits that reflect an accurate understanding of the exposure that behavior creates.
That is harder than it sounds, for a reason that is worth being direct about. The same cognitive tendencies that make people vulnerable to behavioral data collection, the preference for convenience, the underestimation of low-probability risks, the social instinct to share and connect, are also the tendencies that make security training difficult to translate into lasting behavioral change. A one-hour annual compliance module does not rewire habit. A single briefing on data hygiene does not produce a person who consistently thinks about their digital footprint before they act.
What produces that is repeated exposure to the logic of the threat, applied to realistic scenarios, in a way that makes the abstract concrete. The person in the opening scene did not make a series of reckless decisions. He made a series of completely normal ones. The gap between normal behavior and secure behavior is not ignorance exactly. It is a failure of imagination about what the data produced by normal behavior looks like to someone motivated to use it.
Closing that gap is not a technology problem. It is an education problem, a culture problem, and in high-consequence environments, a leadership problem. The organizations that treat digital behavior as a human discipline rather than an IT function are the ones building genuine resilience. The ones that treat it as a procurement decision are building a more expensive version of the same vulnerability they already had.
The Perimeter Is You
Security culture has spent decades building walls. Firewalls, access controls, encrypted channels, hardened endpoints. The underlying assumption was consistent across all of it: that the threat lived outside, that the goal was to keep it there, and that the boundary between safe and unsafe was a technical one that could be defined, defended, and monitored.
That assumption was never entirely accurate. But it was functional enough for long enough that it became doctrine. And doctrine, once embedded in organizational culture and procurement cycles, is resistant to revision even when the environment it was designed for has fundamentally changed.
The environment has fundamentally changed.
The boundary between inside and outside dissolved gradually, then all at once. Mobile devices carried sensitive contexts into unsecured environments. Cloud infrastructure moved data outside any perimeter that an organization could control. Consumer platforms became professional tools. Personal and professional identities merged into a single digital presence that generates a continuous behavioral record regardless of which role its owner believes they are occupying at any given moment.
What remained after that dissolution was not a smaller perimeter. It was a different kind of perimeter entirely. One that is not defined by network topology or physical access controls. One that is defined by the behavioral habits of the individuals who make up an organization, their awareness of their own exposure, and their willingness to treat their digital behavior as a security discipline rather than a personal matter.
That is an uncomfortable shift for organizations accustomed to solving security problems with technology. Technology is procurable. It is auditable. It produces metrics. Human behavior is none of those things in any straightforward sense. It is variable, context-dependent, and shaped by forces that have nothing to do with security policy. An employee who understands the threat intellectually may still default to convenience under pressure, fatigue, or time constraints. A professional who has been briefed on data hygiene may still hand a significant portion of their behavioral profile to a foreign-owned application because everyone they know uses it.
This is not a criticism of individuals. It is a description of how human cognition operates under the conditions that modern digital life produces. The platforms and services that generate the most behavioral exposure were designed by some of the most sophisticated behavioral scientists and engineers in the world, specifically to make their use feel natural, necessary, and low-stakes. Competing with that through policy documents and annual training modules is not a serious strategy.
A serious strategy starts with an honest assessment of what the behavioral record of an individual or organization actually looks like from the outside. Not from the inside, where intentions and context are visible, but from the outside, where only behavior and pattern are legible. What does the digital footprint of your most sensitive personnel reveal to someone who knows how to read it? What patterns are consistent enough to be predictable? What associations are visible that were never intended to be public? What deviations from baseline have already occurred that could be flagged by anyone paying attention?
Most organizations have never asked those questions in a rigorous way. Most individuals have never been given a framework for asking them about themselves. That is the gap that matters. Not the firewall configuration. Not the endpoint protection suite. The gap between how people understand their own digital behavior and how that behavior appears to someone motivated to exploit it.
The Tuesday morning in the opening scene will happen again today. In a different city, with different people, generating a different set of data points that will be stored somewhere indefinitely, available to whoever has the means and motivation to access them. The person producing that record will not think of themselves as a target. They will think of themselves as someone getting coffee.
The question worth sitting with is not whether you are being observed. In any meaningful sense, the answer to that is already yes. The question is whether the record you are building about yourself, day by day, through choices that feel ordinary and inconsequential, reflects the level of awareness that your role, your relationships, and your responsibilities actually require.
The perimeter is you. It always was.
Keola Rogers is the founder of CohēCiv LLC, a digital threat awareness and operational security consultancy. This article is part of an ongoing series exploring the human dimension of digital risk.