Healthcare marketing was built for a world with one privacy law. That world no longer exists.
Shifts in privacy rules have driven innovation in how audiences are built. The result is new methods that deliver greater precision, especially for complex therapies.
For years, precision in healthcare marketing meant predicting someone’s health. Predictive models became the default because they were HIPAA-compliant and no other privacy rules applied. As demand for audience quality grew, some vendors pushed accuracy further and adopted aggressive, risky practices. But the fundamental issue is that the laws changed, making the old, inference-based approach obsolete.
New state privacy laws now regulate any information that suggests a health condition, even if it is only a prediction. These laws treat inferences about health as sensitive. As a result, older modeling approaches carry new compliance risks, and the industry is shifting toward evidence-based strategies built on group-level insights rather than individual data. These methods perform better for complex therapies, where small populations and hard-to-identify patients demand true precision.
This shift is happening for three reasons.
For a long time, HIPAA was the main privacy framework. But HIPAA does not govern advertising. It applies only to providers, payers, and a narrow group of related entities such as their business associates. Predictive models were acceptable because they did not use identifiable clinical data and therefore stayed outside HIPAA’s reach. Instead, they relied on consumer data and behavioral signals to infer who might have a condition.
Modern state privacy laws take a different view. They focus on what a company does with data, not where the data came from. In fact, under some state laws, almost any data can be considered health data.
As a result, any signal that points to a health condition is treated as sensitive. It is not limited to medical records. It includes predictions, behavioral clues, and digital patterns that might suggest a diagnosis. These laws also give consumers new rights, require transparency and simplicity, mandate data minimization, and hold companies to what a consumer would reasonably expect them to do with their data.
Washington’s My Health My Data Act (MHMD) is the strongest example. It covers any information that can be used to identify or infer something about a person’s physical or mental health. The simple act of placing someone into a segment that implies a condition is enough to trigger heightened obligations under the law, even without certainty.
Several other states have adopted these frameworks, including Nevada, Colorado, and Connecticut. In total, twenty-two states now have privacy laws that impact advertising in healthcare. Together, these laws send a consistent message: a prediction about health is treated with the same sensitivity as confirmed health information.
States across the country are taking a similarly strict view of health data, especially when it comes to inferences. Many existing laws already treat inferred health information as sensitive, and several states are signaling that future legislation will go even further.
Looking ahead to 2026, the direction is clear, with states like Vermont and Massachusetts signaling even stricter approaches. These bills place tighter limits on data use and sharply constrain inference-based targeting. Further, some of these bills draw heavily from Maryland’s data minimization approach, limit or prohibit the sale of sensitive data, and in some cases introduce a private right of action.
The message is consistent: predicting or inferring health information is increasingly viewed as out of bounds. This is a major shift in the legal environment, and it changes the risk profile of inference-based methods.
Even before these laws were passed, inference carried an uncomfortable truth. Brands have seen the consequences when consumers ask why they were targeted with a health-related ad, or when regulators scrutinize how a company knew or assumed something so personal. This concern is no longer theoretical. California’s enforcement action against Healthline is a clear example. The Attorney General alleged that Healthline shared inferred health interests with platforms like Meta for advertising purposes, triggering significant penalties.
Predicting sensitive details about a person’s health feels intrusive. Most people do not expect a company to guess whether they might have a disease. They do not expect that guess to be shared across platforms or used to decide what ads they see. This is especially true for sensitive conditions such as sexual health, women’s health, mental health, and rare disease.
This discomfort helps explain where lawmakers are focusing their attention. Health information is deeply personal, and modern privacy laws increasingly rely on concepts like reasonable expectations and purpose limitation. Inferences matter because they often repurpose data collected for one reason into conclusions about health for advertising, a use that frequently falls outside what a consumer would reasonably expect. That gap between intent and use is exactly what these laws are designed to address.
These concerns go beyond compliance. They shape how the ecosystem treats personal information and how much trust patients and consumers place in it.
The shifting landscape has led to innovation. Evidence-based approaches are defined by one core principle: measure real-world patterns of groups without trying to learn or predict anything about an individual.
This approach aligns cleanly with state privacy laws because it avoids the creation of sensitive health inferences. There is no individual-level prediction. There is no profile of who might have a condition. There is only an aggregated view of groups of people.
Evidence-based methods are also easier for compliance teams to vet. The rules are simple and clear. The logic can be reviewed quickly. Buyers, agencies, and publishers can understand exactly what they are activating.
This approach becomes even more powerful when finding patients for complex therapies. Small populations and fast-changing diagnosis patterns demand methods that reflect the real world without relying on consumer signals. Predictive models struggle because the populations are small and the signals are inconsistent. Newly diagnosed patients are often missed because consumer profiles take time to update.
Evidence-based audiences solve these problems by tracking real cohort patterns and adjusting as populations change. This makes them more stable and more precise for complex therapies where every patient matters.
The shift from inference to evidence reflects changing legal and cultural expectations around health privacy. State laws have made it clear that predicting a health condition is treated as sensitive. At the same time, consumers expect more control and less profiling. The industry needs methods that respect these boundaries while still delivering precision.
Evidence-based audiences meet that need by delivering greater precision for complex therapies without relying on inference. They reduce risk. They increase trust. And they perform as well as, and in many cases better than, legacy methods. This benefits the entire ecosystem by giving brands confidence, agencies clarity, publishers safer data practices, and patients greater respect for how their information is used.
As healthcare advertising moves away from a world built on inference, the most successful strategies will be the ones that are simple, transparent, and aligned with both legal requirements and patient expectations.
This piece was written by Jeremy Mittler, Co-founder and CEO of Blueprint Audiences.