How can we help?


Live context refers to what the user is doing at a given point in time, in the physical world, while using an app (e.g. commuting, relaxing, running).

We combine mobile sensor data (e.g. accelerometer, gyroscope, location) with our patented on-device AI to create live contexts, without requiring any knowledge about users’ identity.

We currently predict over 200 live contexts.

ID-less audiences refer to cohorts of mobile app users (e.g. joggers, shoppers) who demonstrate certain behavioural attributes and can be targeted by advertisers without the need for mobile advertising identifiers (IDFA/AAID).

As a user interacts with an app over a period of time, we learn about their habits based on what they do in the physical world (e.g. running 3x a week, driving every day) using any of our over 200 live contexts. These repeat behaviours give us an indication of which audiences the user might be categorised into (e.g. joggers, vehicle users). As all of this happens on the device itself, there is no need to send this data to the cloud, eliminating the need for any identifiers.
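As a minimal sketch of this idea (illustrative only, not our SDK's actual API or prediction logic): repeated live-context observations are tallied on the device, and an audience label is assigned once a behaviour recurs often enough. The mapping and threshold below are invented for the example.

```python
from collections import Counter

# Hypothetical mapping from a repeated live context to an audience label.
CONTEXT_TO_AUDIENCE = {
    "running": "Joggers",
    "walking": "Walkers",
    "driving": "Vehicle Users",
}

def audiences_from_history(weekly_contexts, min_occurrences=3):
    """Derive audience memberships from one week of on-device context observations."""
    counts = Counter(weekly_contexts)
    return sorted(
        CONTEXT_TO_AUDIENCE[ctx]
        for ctx, n in counts.items()
        if n >= min_occurrences and ctx in CONTEXT_TO_AUDIENCE
    )

# A user who ran three times and drove every day this week:
week = ["running"] * 3 + ["driving"] * 7 + ["walking"]
print(audiences_from_history(week))  # ['Joggers', 'Vehicle Users']
```

Note that nothing here requires an identifier: the history and the resulting labels never need to leave the device.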

Traditionally, audiences have been built using data from various sources (e.g. mobile apps, IP address, web browsing) and stored in cloud databases. A unique identifier is then used to combine these different data points to determine which audiences a user might belong to, such as: EA7583CD-A667-48BC-B806-42ECB2B48606 → Auto Intender, Car Enthusiast.

With on-device audiences, there is no centralized data repository because audiences are generated and stored within the app itself. There is no need for a unique identifier to determine a user’s audience memberships, since all behavioural data already resides within the app, such as: Music App: Joggers, Walkers, Early Risers.
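The contrast between the two models can be sketched with two data shapes (hypothetical field names and lookup function, purely for illustration):

```python
# Traditional model: a cloud database keyed by a unique advertising identifier.
cloud_profiles = {
    "EA7583CD-A667-48BC-B806-42ECB2B48606": ["Auto Intender", "Car Enthusiast"],
}

def cloud_lookup(user_id):
    """Cloud targeting needs the user's ID before it can find their audiences."""
    return cloud_profiles.get(user_id, [])

# On-device model: the audience list lives inside the app, with no identifier at all.
on_device_audiences = ["Joggers", "Walkers", "Early Risers"]

print(cloud_lookup("EA7583CD-A667-48BC-B806-42ECB2B48606"))
print("Joggers" in on_device_audiences)  # membership check, no ID required
```

The design difference is the key: the cloud model cannot answer any question without first resolving an identifier, while the on-device model holds the answer locally.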

These on-device audiences can be activated by advertisers to reach relevant audiences without requiring any app-level identifiers for targeting, thereby preserving the user’s identity.

Yes. We work closely with our demand partners and customers and leverage their experience to bring new audiences and live context types to market.

If the app runs in the background (or has a background service), then we will run as well. Audio apps often run in the background, because users typically listen to music while continuing to use other apps. If the app only runs in the foreground, i.e. when the app is open, we will only run in the foreground.

A user can be a member of multiple audiences at the same time. For example, if they always use the app when they go for a walk or when they go for a run, they’ll be members of the Walkers and Joggers audiences.

Advertisers can personalise ads in these three ways:

  • Based on the audiences a user belongs to (e.g. walkers, joggers)
  • Based on a user’s live context (e.g. walking, post-exercise)
  • Based on a combination of a user’s live context and the audiences they belong to (e.g. joggers, post-exercise)
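The three modes above can be sketched as a simple ad-selection rule (hypothetical ad names and rule order, not our SDK's API):

```python
def select_ad(live_context, audiences):
    """Pick an ad line item: combined context+audience rule first, then either signal alone."""
    if "Joggers" in audiences and live_context == "post-exercise":
        return "sports-drink-recovery"   # combination of audience and live context
    if live_context == "walking":
        return "podcast-promo"           # live context alone
    if "Walkers" in audiences:
        return "footwear-generic"        # audience alone
    return "house-ad"                    # fallback when no signal matches

print(select_ad("post-exercise", ["Joggers", "Walkers"]))  # sports-drink-recovery
```

Ordering the combined rule first reflects that context-plus-audience is the most specific (and typically most valuable) match.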

You can see a demo here, or book a live one with our sales team here.

Mobile, in-app.

iOS, Android, Unity, React Native


We integrate our SDK either directly with an app or within another SDK that already sits inside an app.

Through our supply partners (SSPs, DSPs) and publisher partners. Think of us as an on-device data provider that unlocks the value of 1st party data.

No. We analyse the behaviour and environment of users, using mobile device sensors. We have no access to demographic information.

A number of our ID-less audiences are mapped to IAB’s Audience Taxonomy 2.0.

We are in the process of standardising the live context taxonomy with our industry partners such as IAB Tech Lab and Prebid.

The audience scale depends on the number of users of the mobile app and how they use the app in different moments throughout the day. Live context signals operate at 100% scale from day 1 of the SDK going live, whereas ID-less audiences begin to appear after a few days of app usage.

If you’re an existing NumberEight partner, you can request the demo app here. If you’re new to NumberEight, book a demo with our team.

Please contact sales and we’ll set you up with the documentation.

Yes, you can invite as many as you want. Contact support to get them set up.


On-device (or edge) computing refers to bringing the computation of data and its storage closer to where the data is being generated, or to the device itself. In the case of smartphones, instead of sending raw sensor data to the cloud, data processing and predictive computations happen entirely on the device. This means a faster response time, reduced bandwidth usage and a minimisation of data leakage risk.
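The pattern can be sketched as follows (illustrative only: the thresholds and classification logic are invented, not our actual models). Raw sensor samples are reduced to a single label locally, so only the aggregate result would ever be shared:

```python
import statistics

def classify_motion(accel_magnitudes):
    """Reduce raw accelerometer magnitudes (m/s^2) to one context label on-device.

    The raw samples never leave this function; only the label would be shared.
    The variance thresholds are invented for illustration.
    """
    variance = statistics.pvariance(accel_magnitudes)
    if variance < 0.05:
        return "stationary"
    return "walking" if variance < 4.0 else "running"

samples = [9.8, 9.9, 9.7, 9.8, 9.9]  # near-constant gravity: device at rest
print(classify_motion(samples))  # stationary
```

Because only the label crosses the network boundary (if anything does), bandwidth is minimal and the raw data cannot leak from a server it never reached.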

The impact is negligible. We engineer our software so that a device retains around 24 hours of battery life even when an app uses it continuously.

iOS: ~2-2.5MB (depending on architecture)
Android: 9.5MB (installed size approx 2.0MB)


We work with digital audio, mobile games, mobile commerce and mobile SSPs.

No, we are a mobile data platform.

No, we are a mobile data platform.

We are not geo-limited. Our current customer base is centered around the US, UK, EU and Australia.

Yes. We can offer a 2-3 month pilot period to our partners at a discounted rate.

Any app or SDK owner can integrate our software for free (up to 10,000 DAU) using the Community Software License and get access to our customer dashboard to get further insights about their users, audience memberships, and other analytics.


Information about the types of data we collect and process can be found in our privacy policy.

Whilst there is no single accuracy number, since we predict over 200 live contexts across various categories, each of our predictions is probabilistic and made with a high degree of accuracy.

We use a combination of proprietary datasets (collected in collaboration with device manufacturers and academic institutions), public datasets, simulated mobile sensor data, data collected from test devices, and live mobile sensor data from smartphones.

Audience data is currently refreshed every 24 hours.

Our mobile SDK collects mobile sensor data on the device, and processes this data into live context predictions and ID-less audiences (still, on the device).

We store our data on Google’s servers located in London, UK.

Yes. Such personalisation is based on users’ behaviour and environmental context.

No. All raw data is only processed on the device and is never sent out.


We do our utmost to ensure user data is respected and never mishandled. A few things that we do to deliver privacy-by-design products:

  • Doing as much processing as possible on the device rather than sending data to the cloud, significantly minimising the chance of any data leakage or misuse
  • Not sending any raw sensor data off the device to prevent fingerprinting
  • Only sending aggregated insights off the device (e.g. walking, joggers)
  • Not associating user data with a unique identifier
  • Not storing data for longer than we need to deliver value

Our products aren’t impacted by the loss of identifiers, because neither our live contexts nor our audience membership predictions rely on any app- or device-level identifiers.

No foreseeable impact.

We operate on the basis of either consent or legitimate interest (see our impact assessment for more details), and keep data safe by storing as little as possible and pseudonymising any data sent to our servers.



No. Since we don’t send any mobile identifiers, there is no way to target an individual user. Thus, retargeting users isn’t possible.

We do not allow Clients to provide us with any information about minors under 13 years old. If we learn that personal information from users less than 13 years of age has been collected, we will take reasonable measures to promptly delete such data from our records.

Yes. Our TCF Vendor ID is 882.


For payment-related queries, please contact us.