Got some new data you want to send to us? How in the world do you send a new ping? Follow this guide to find out.
Do not try to implement new pings unless you know specifically what questions you're trying to answer. General questions like "How do users use our product?" won't cut it; these need to be specific, concrete asks that can be translated into data points. This will also make things easier down the line when you start data review.
Use JSON Schema to start with. See the example schemas in the Mozilla Pipeline Schemas repo. This schema is used only to validate incoming data; any ping that doesn't match the schema will be discarded. Validate your JSON Schema using a validation tool.
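To illustrate what "doesn't match the schema will be discarded" means in practice, here is a minimal sketch with hypothetical field names (`clientId`, `searchCount`) and a hand-rolled required-field check standing in for a full JSON Schema validator such as the `jsonschema` package:

```python
# A hypothetical JSON Schema for a new custom ping (field names are illustrative).
schema = {
    "type": "object",
    "properties": {
        "clientId": {"type": "string"},
        "payload": {
            "type": "object",
            "properties": {"searchCount": {"type": "integer"}},
            "required": ["searchCount"],
        },
    },
    "required": ["clientId", "payload"],
}

# Minimal stand-in for a real validator: checks only "required" keys,
# recursing into nested objects.
def has_required(doc, schema):
    if schema.get("type") == "object":
        for key in schema.get("required", []):
            if key not in doc:
                return False
        for key, sub in schema.get("properties", {}).items():
            if key in doc and sub.get("type") == "object":
                if not has_required(doc[key], sub):
                    return False
    return True

good = {"clientId": "abc", "payload": {"searchCount": 3}}
bad = {"payload": {}}
print(has_required(good, schema))  # True  -> ping accepted
print(has_required(bad, schema))   # False -> ping discarded
```

In the real pipeline this check is done by a full JSON Schema validator; the sketch only shows the accept/discard behavior.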
We already have automatic deduplication based on `docId`, which catches about 90% of duplicates and removes them from the dataset.
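The idea behind `docId`-based deduplication can be sketched as keeping the first ping seen for each `docId` (field names here are illustrative, not the pipeline's actual implementation):

```python
# Keep the first ping seen per docId and drop later duplicates.
def dedupe(pings):
    seen = set()
    unique = []
    for ping in pings:
        if ping["docId"] not in seen:
            seen.add(ping["docId"])
            unique.append(ping)
    return unique

pings = [
    {"docId": "a", "payload": 1},
    {"docId": "a", "payload": 1},  # duplicate submission
    {"docId": "b", "payload": 2},
]
print(dedupe(pings))  # two pings remain; the duplicate "a" is dropped
```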
The first schema added should be the JSON Schema made in step 2. Add at least one example ping against which the schema can be validated. These test pings will be validated automatically during the build.
Next, a Parquet output schema should be added. This adds a new dataset, available in Re:dash.
The best documentation we have for the Parquet output schema is the set of examples in the Mozilla Pipeline Schemas repo.
The Parquet output also has a `metadata` section. These are fields added to the ping at ingestion time; they might come from the URL submitted to the edge server, or from the IP address used to make the request. A separate document lists the available metadata fields for all pings.
The stream you're interested in is probably `telemetry`.
For example, look at `system-addon-deployment-diagnostics` immediately under the `telemetry` stream. Its `schema` element has top-level fields (e.g. `Type`), as well as more fields under the `Fields` element. Any of these can be used in the `metadata` section of your Parquet schema.
Important note: schema evolution of nested structs is currently broken, so you will not be able to add fields to your `metadata` section later. We recommend including any fields that may seem useful up front.
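As a sketch only (verify the exact dialect against the examples in the pipeline schemas repo), a Parquet output schema with a `metadata` group might look like the following; all field names here are hypothetical:

```
message my_custom_ping {
    required binary clientId (UTF8);
    required group metadata {
        required int64 Timestamp;
        optional binary Type (UTF8);
    }
    optional int64 searchCount;
}
```

Because of the nested-struct limitation above, the `metadata` group should contain every field you might ever want from the start.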
Note that this only works if data is already being sent and you want to test the schema you're writing against the data currently being ingested.
Test your Parquet output in Hindsight by using an output plugin. See the Core ping output plugin for an example, where the Parquet schema is specified as `parquet_schema`. If no errors arise, the schema is most likely correct. Do not use the "Deploy" button to actually deploy; that will be done by operations in the next step.
Real-time analysis is key to ensuring your data is being processed and parsed correctly. The plugin should follow the standard format for Hindsight analysis plugins. This lets you check for validation errors, size changes, duplicates, and more. Once you have the numbers set, file a bug to have operations deploy it.
Data Platform Operations takes care of this step; to initiate it, file a bug like Bug 1292493. Once deployed, the data will be available to query using the Dataset API (used with PySpark on ATMO machines).
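As a sketch of what querying looks like once the dataset is deployed, assuming the `python_moztelemetry` Dataset API on an ATMO Spark cluster (the `docType` and date values are illustrative, and `sc` is the SparkContext ATMO provides; this is not runnable outside a cluster):

```
from moztelemetry.dataset import Dataset

pings = (
    Dataset.from_source("telemetry")
           .where(docType="my-custom-ping")       # hypothetical doc type
           .where(submissionDate="20170101")      # illustrative date
           .records(sc)
)
```

The `where` calls filter on the dataset's partitioning dimensions before any data is fetched, which keeps the job cheap.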
There also needs to be a schema for the layout of the Heka files in `sources.json`. If you want to do this, talk to Data Platform Operations.
If you're using the Telemetry APIs, use their built-in support for sending pings: the Gecko Telemetry APIs, the Android Telemetry APIs, or the iOS Telemetry APIs. Otherwise, see the edge server documentation for the endpoint and expected format.
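As a sketch only, assuming a submission URL layout of `/submit/<namespace>/<docType>/<version>/<docId>` (an assumption; verify the actual path and host against the endpoint documentation), the URL for a manual submission could be built like this:

```python
import uuid

# Hypothetical edge-server URL layout; check the endpoint docs before use.
def submission_url(base, namespace, doc_type, version, doc_id):
    return "/".join([base, "submit", namespace, doc_type, str(version), doc_id])

url = submission_url(
    "https://incoming.telemetry.mozilla.org",  # illustrative host
    "my-namespace", "my-custom-ping", 1, str(uuid.uuid4()),
)
print(url)
```

The `docId` segment is what the pipeline's deduplication keys on, so it should be a fresh UUID per ping.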
Work is underway on a generic endpoint. Pings sent there can be easily registered and will be automatically available in Re:dash. Please check back later for those docs.
You can schedule it on Airflow, or run it as a job in ATMO. If the output is Parquet, you can add it to the Hive metastore to have it available in Re:dash. Check the docs on creating your own datasets.
Last steps! What are you using this data for anyway?