
Events vs Attributes

Aampe’s agents treat every message as an experiment, building causal understanding from events over time rather than making statistical guesses from static attributes.

Observation vs. Intervention

Typical ML systems learn by observing correlations in attributes, past behaviors, and demographics. The problem is that none of this tells you what caused anything. It’s information without a decision advantage. Aampe’s agents aren’t guessing what’s likely—they’re changing what happens. Every agent intervenes in a user’s behavioral stream, then measures how that intervention alters what follows. That change is what the agent learns from. Attributes like age or device type don’t change fast or cleanly enough to provide that signal. Events are timestamped and tied to specific moments before and after an agent acts. That temporal structure makes them the only reliable substrate for causal learning.
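To make the distinction concrete, here is a minimal, hypothetical sketch of the two kinds of data; the field names are illustrative, not Aampe’s schema. Events carry timestamps that anchor them before or after an intervention, while attributes are timeless snapshots.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical event record: every row is tied to a moment in time, so an
# agent can compare what happened before and after it acted.
@dataclass
class Event:
    user_id: str
    name: str              # e.g. "app_open", "add_to_cart", "purchase"
    timestamp: datetime
    properties: dict = field(default_factory=dict)

# A static attribute, by contrast, is a snapshot with no timeline.
user_attributes = {"user_id": "u_123", "age_band": "25-34", "device": "android"}

# The agent's own intervention is itself an event in the same stream.
intervention = Event(
    user_id="u_123",
    name="message_sent",
    timestamp=datetime(2024, 5, 1, 18, 30),
    properties={"copy_variant": "discount_nudge"},
)
```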

The Logic of Events

An event stream tells a story over time. When an agent intervenes, it inserts a new chapter. The question isn’t “Did something happen after I acted?” The question is “Did what happened differ from what would have happened anyway?” Aampe agents use difference-in-differences reasoning. Each maintains a baseline of normal behavior for its user. When the agent acts, it measures deviation from that baseline. If the deviation exceeds normal fluctuation, the agent infers its action likely caused the change. This structure—observing before, intervening, measuring after—separates coincidence from consequence.
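The before-and-after comparison can be made concrete with a short sketch. The snippet below is illustrative only; the function name, window counts, and z-score threshold are assumptions, not Aampe’s implementation. It checks whether a user’s post-action behavior deviates from their own baseline by more than normal fluctuation.

```python
from statistics import mean, stdev

def exceeds_baseline(baseline_counts, post_count, z_threshold=2.0):
    """Compare behavior after an action to the user's own pre-action baseline.

    baseline_counts: per-window event counts observed before the agent acted
    post_count:      event count in the matching window after the action
    Returns (exceeded, z): whether the shift is larger than normal fluctuation,
    and the size of the shift in baseline standard deviations.
    """
    mu = mean(baseline_counts)
    sigma = stdev(baseline_counts) or 1e-9  # guard against a perfectly flat baseline
    z = (post_count - mu) / sigma
    return z > z_threshold, z

# Hypothetical usage: six pre-intervention windows, then the window after a message.
exceeded, z = exceeds_baseline([2, 3, 2, 4, 3, 2], post_count=7)
# exceeded=True suggests the action likely caused the change; a small z would be
# indistinguishable from the user's normal fluctuation.
```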

Why Attributes Fall Short

Predictive models built on attributes operate entirely on correlation. They can tell you that people with certain traits are more likely to convert, but not whether your message caused that response. Knowing that someone will probably convert doesn’t tell you what to do. Attributes also create structural confounds. Geography might correlate with conversion while the true driver is pricing. Device type might stand in for connection speed. Many attributes are proxies for unmeasured factors, and their meaning shifts over time. Without interventions and time-based events, attribute-driven models can predict probabilities but can’t guide decisions.

Many “Attributes” Are Really Events

Much of what teams call attributes is really compressed event history. A “new user” flag points to a specific moment—the first app open. A “high-value user” label summarizes recent purchases. Even a geography change is an event. Flattening events into attributes erases the timeline that gave them meaning. Events keep temporal structure intact, letting agents measure cause and effect rather than just noting that a change exists.
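A small, hypothetical illustration of that flattening (the field names and threshold are assumptions): the same purchase history can be collapsed into a label, or kept as a timeline that still supports before/after questions.

```python
from datetime import datetime

# Hypothetical event history for one user: each entry keeps its timestamp.
purchases = [
    {"amount": 40, "timestamp": datetime(2024, 4, 2)},
    {"amount": 55, "timestamp": datetime(2024, 4, 20)},
    {"amount": 120, "timestamp": datetime(2024, 4, 28)},
]

# Flattening the history into a label discards the timeline entirely.
is_high_value = sum(p["amount"] for p in purchases) > 150  # True

# With events intact, an agent can still ask time-based questions, e.g.
# "did purchases cluster after the last message was sent?"
last_message_at = datetime(2024, 4, 25)
purchases_after_message = [p for p in purchases if p["timestamp"] > last_message_at]
```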

Attributes as Boundaries, Not Training Data

Attributes still matter—as constraints, not learning inputs. If content exists only in Spanish, there’s no point testing whether English speakers respond. Language preference serves as an eligibility gate before learning begins. The same applies to geography for location-dependent offers, device type for message formats, and subscription tier for content gating. Attributes narrow the action space to what’s possible. Business logic lives in attributes; behavioral learning lives in the event stream.
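A minimal sketch of such an eligibility gate, assuming hypothetical attribute and action fields: attributes filter the action space up front, and learning happens only over what remains.

```python
def eligible_actions(user, all_actions):
    """Narrow the action space using attributes as constraints, not features."""
    # Language gate: only offer content that exists in the user's language.
    actions = [a for a in all_actions if a["language"] == user["language"]]
    # Subscription gate: premium-only content is off-limits for free users.
    if user["tier"] != "premium":
        actions = [a for a in actions if not a.get("premium_only", False)]
    # The agent then learns which of the remaining actions works for this
    # user from the event stream, not from the attributes themselves.
    return actions

# Hypothetical usage:
candidates = eligible_actions(
    {"language": "en", "tier": "free"},
    [
        {"id": "promo_es", "language": "es"},
        {"id": "promo_en", "language": "en"},
        {"id": "vip_offer", "language": "en", "premium_only": True},
    ],
)
# candidates -> [{"id": "promo_en", "language": "en"}]
```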

Why Attribute-Based Learning Would Fail

Training on attributes would mean freezing users into buckets like “likely buyer” and assuming the same action works for everyone in that bucket. A clothing app targeting all “high income” users with premium products ignores that income isn’t intent. A fitness app pushing yoga to everyone over 40 misses that some run marathons. Attributes enable correlation, not reasoning. Personalization dies when the system stops testing what works for each person.

Intervention Is Ground Truth

Aampe’s architecture assumes intelligence lies in acting and observing what happens, not passively summarizing data. The agent doesn’t need demographics or clusters; it needs to know whether this action moved the needle for this person at this moment. Events make that possible. Attributes describe users; events describe change. Only by measuring change can a system learn to make better choices.