The next thing to think about, then, is fetching the file.
The easiest way would be to have my laptop run a cron - it already has access to Nextcloud (because I run the Nextcloud Desktop Client).
But, I don't really want this tied to my laptop - I'd rather it ran as a headless process on something I don't periodically unplug and walk off with.
Ideally, I'd like to containerise it - that'll then give me a choice between running it on one of my docker hosts or running it as a kubernetes job.
Running as an ephemeral container, though, rules out using the desktop sync client - we'll need to fetch the database.
The other thing to think about is whether we want to track some sort of state - if the database hasn't changed since the last invocation, do we actually want to do anything? That can probably be added later though.
I had a quick play around with Nextcloud's public sharing functionality. Technically we could use that instead of WebDAV but I don't really like the idea of it being publicly available (you can add a password to it, but that auth flow isn't well disposed to automation).
I obviously don't really want a config file sitting around with my nextcloud creds in it.
So, instead, I've created a new nextcloud user (service_user) and made sure it's configured to accept shares automatically
I've then shared the GadgetBridge directory with service_user
(The process was a little unintuitive IMO - after right clicking on the dir and saying Share, you need to type the user's name in that unobtrusive text box - I hadn't even really noticed it).
I've allowed service_user write permissions on the directory (because that means we can use it to store state later).
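Since the share is writable, the same directory could eventually hold a small state file (for example, a marker recording the last export processed). A rough sketch of how that write-back might look over WebDAV - the env var names and the state filename here are placeholders, not necessarily what the script will end up using:

import os
import requests

def put_state(state, filename="gadgetbridge_to_influxdb.state"):
    # PUT a small state file into the shared directory via Nextcloud's WebDAV endpoint
    url = "{}/remote.php/dav/files/{}/{}/{}".format(
        os.environ["NEXTCLOUD_URL"],
        os.environ["NEXTCLOUD_USER"],
        os.environ.get("WEBDAV_PATH", "GadgetBridge"),
        filename,
    )
    resp = requests.put(
        url,
        data=state.encode(),
        auth=(os.environ["NEXTCLOUD_USER"], os.environ["NEXTCLOUD_PASS"]),
        timeout=30,
    )
    resp.raise_for_status()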
We're about ready to start thinking about queries.
Can't say for sure what'll be in the database until my watch turns up and is hooked up, however the Gadgetbridge Wiki has a bunch of example queries which can probably be used as a guide in the meantime (although they mostly relate to Pebbles).
Most should be relatively straightforward to collect then. PAI apparently stands for "Personal Activity Intelligence" and is a normalised indicator of activity levels - the idea being to keep it at or above 100 across 7 days.
There are some non-Huami specific tables in there that'll probably be of interest too
So, assuming that the Bip tracks all of them (it should), we should be able to write queries to extract
Current battery level (allowing alerting if it's getting low)
Step count (might only be when the watch detects extended activity though, we'll see)
Activity intensity
Sleep state
Heart Rate
Incremental PAI scores
Sleep respiratory rate
SpO2 level (oxygen blood saturation)
Stress level
Heart rate might be a bit of an odd one, as the current heart rate could be in either of HUAMI_EXTENDED_ACTIVITY_SAMPLE or HUAMI_HEART_RATE_MANUAL_SAMPLE. Probably best to write them into separate fields (or perhaps add a tag - is_activity - to differentiate between them).
These all live in different tables but otherwise share a schema, so we iterate
through the types generating exactly the same schema of result, differentiated only
by the value of tag sample_type which will be one of
manual
max
resting
The manual samples are taken at the interval set on the watch for non-activity
HR measurements. If activity is detected measurements will be written into a different
table (which we'll add support for shortly)
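A minimal sketch of that iteration (the manual table name is confirmed above; the max and resting table names are assumptions to be checked against the real database):

# Hypothetical table mapping - only HUAMI_HEART_RATE_MANUAL_SAMPLE is confirmed
HR_TABLES = {
    "manual": "HUAMI_HEART_RATE_MANUAL_SAMPLE",
    "max": "HUAMI_HEART_RATE_MAX_SAMPLE",
    "resting": "HUAMI_HEART_RATE_RESTING_SAMPLE",
}

def query_heart_rates(conn):
    # conn is an sqlite3 connection onto the downloaded export
    results = []
    for sample_type, table in HR_TABLES.items():
        rows = conn.execute(
            "SELECT TIMESTAMP, DEVICE_ID, HEART_RATE FROM {} WHERE HEART_RATE > 0".format(table)
        ).fetchall()
        for ts, device_id, heart_rate in rows:
            results.append({
                "timestamp": ts,
                "fields": {"heart_rate": heart_rate},
                "tags": {"device_id": device_id, "sample_type": sample_type},
            })
    return results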
We now have queries to extract the data - they may need tweaking once we have some data to go on, but they're only extracting the raw data as it's stored in the database so they shouldn't need much.
Can't really calculate any aggregates (such as cumulative daily steps) until we know what the data looks like.
So, it's time to move on to having this output to Influx.
Take the result set and turn them into InfluxDB points, writing out to InfluxDB as we go.
The client is in batching mode, so each write is into the client's
buffer, flushed whenever the batch size (or flush interval) is hit
with one final flush before function exit.
InfluxDB config is passed through a set of environment variables
There's obviously nothing to output yet, but the script now attempts to turn results into points and write out to Influx. The default measurement name (controlled by env var INFLUXDB_MEASUREMENT) is gadgetbridge.
With the script theoretically complete, the next thing to look at is containerising it so that we can start to think about how best to schedule it.
OK, so now that it's containerised, need to think about options around scheduling.
We could
Just run it on a docker host via cron
Set it up as a Kubernetes job
Turn it into a prefect flow and deploy that way
Although I like the idea of number 3, I think I'd rather wait until I've had eyes on the data (because, ideally, we'll take advantage of Prefect's events support to trigger notifications).
For now, I think we'll create a Kubernetes cronjob.
Should run at 15 and 45 mins past. I've actually just adjusted it to run on the hour too so that I don't have to wait too long to be able to check the job's logs
There's been some movement, Gadgetbridge have been able to add support for the Bip3. Direct support is, currently, pending release, but I've been able to add my watch as a GTS-2 in the meantime (thanks to @andy for flagging it to me).
Re-opening so that I can pick this back up (though, if there are any substantial changes needed, they should probably be filed in the dedicated project)
The next step, then, is getting auto-DB exports enabled again.
Hit the Menu
Choose Settings
Scroll down to the Auto export location
Tap Export Location
Browse into a Nextcloud location
Provide gadgetbridge as the output file name
Back in the menu, tap Auto export enabled
Set the export interval to 1 hour
Whilst in the settings menu, it's also worth enabling Auto fetch activity data so that Gadgetbridge attempts to connect to the watch whenever your phone is unlocked (which, ideally, will mean data is synced sooner)
INSERT INTO MI_BAND_ACTIVITY_SAMPLE VALUES(1692871268,1,1,-1,0,1,91);
INSERT INTO MI_BAND_ACTIVITY_SAMPLE VALUES(1692871270,1,1,-1,0,1,90);
INSERT INTO MI_BAND_ACTIVITY_SAMPLE VALUES(1692871271,1,1,-1,0,1,90);
INSERT INTO MI_BAND_ACTIVITY_SAMPLE VALUES(1692871272,1,1,-1,0,1,89);
So, we need to fetch some data from there
sqlite> .schema MI_BAND_ACTIVITY_SAMPLE
CREATE TABLE IF NOT EXISTS "MI_BAND_ACTIVITY_SAMPLE"
("TIMESTAMP" INTEGER NOT NULL ,
"DEVICE_ID" INTEGER NOT NULL ,
"USER_ID" INTEGER NOT NULL ,
"RAW_INTENSITY" INTEGER NOT NULL ,
"STEPS" INTEGER NOT NULL ,
"RAW_KIND" INTEGER NOT NULL ,
"HEART_RATE" INTEGER NOT NULL ,
PRIMARY KEY ("TIMESTAMP" ,"DEVICE_ID" ) ON CONFLICT REPLACE
) WITHOUT ROWID;
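Based on that schema, extraction might look something like the sketch below (the periodic_samples tag value matches the query used further down; the exact field handling is an assumption):

def query_periodic_samples(conn):
    # conn is an sqlite3 connection onto the downloaded export
    rows = conn.execute(
        "SELECT TIMESTAMP, DEVICE_ID, HEART_RATE, STEPS, RAW_INTENSITY "
        "FROM MI_BAND_ACTIVITY_SAMPLE"
    ).fetchall()
    results = []
    for ts, device_id, heart_rate, steps, intensity in rows:
        fields = {"steps": steps, "raw_intensity": intensity}
        if heart_rate > 0:  # -1 indicates no reading was taken in that slot
            fields["heart_rate"] = heart_rate
        results.append({
            "timestamp": ts,
            "fields": fields,
            "tags": {"device_id": device_id, "sample_type": "periodic_samples"},
        })
    return results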
Whilst we're here, the other tables containing data are
That's done the trick - I can now retrieve HR data with
SELECT "heart_rate" FROM "testing_db"."autogen"."gadgetbridge" WHERE time > :dashboardTime: AND time < :upperDashboardTime: AND "sample_type"='periodic_samples'
I've updated the docker image to use the latest version of the script, but I'm going to have my cronjob pull from my local registry rather than docker hub - I've noticed a few crons failing recently, I guess Docker have been tightening their rate-limits further as part of their crusade against having users.
Although data's coming over, something's still not quite right.
Gadgetbridge doesn't seem to be getting step data at all - it's claiming 0 steps today, whilst the watch is reporting around 1600.
The band doesn't seem to be taking a HR measurement at the configured interval - I set it to 10 minutes, but the points that came through in the database are an hour apart (looks like it takes multiple reads each time)
sqlite> select * FROM MI_BAND_ACTIVITY_SAMPLE ORDER BY TIMESTAMP DESC LIMIT 10;
TIMESTAMP|DEVICE_ID|USER_ID|RAW_INTENSITY|STEPS|RAW_KIND|HEART_RATE
1692877654|1|1|-1|0|1|-1
1692877653|1|1|-1|0|1|-1
1692877652|1|1|-1|0|1|-1
1692877651|1|1|-1|0|1|-1
1692877650|1|1|-1|0|1|76
1692872425|1|1|-1|0|1|100
1692872423|1|1|-1|0|1|100
1692872422|1|1|-1|0|1|100
1692872421|1|1|-1|0|1|100
1692872420|1|1|-1|0|1|100
I think... rather than chasing this down, it'd be wise to switch to the nightly build of Gadgetbridge and see how the "official" support behaves - it may well already have been addressed (plus, we'll get sleep, stress and PAI support)
Have switched over - now have the option to collect Stress information, and PAI is showing up in the graphs.
Steps are still 0 though
sqlite> select * FROM MI_BAND_ACTIVITY_SAMPLE ORDER BY TIMESTAMP DESC LIMIT 10;
1692881653|1|1|-1|0|1|-1
1692881652|1|1|-1|0|1|-1
1692881651|1|1|-1|0|1|-1
1692881650|1|1|-1|0|1|-1
1692881649|1|1|-1|0|1|85
1692881648|1|1|-1|0|1|-1
1692881647|1|1|-1|0|1|82
1692881645|1|1|-1|0|1|82
1692881644|1|1|-1|0|1|82
1692881643|1|1|-1|0|1|82
Dumping to text doesn't show them appearing anywhere else. I might turn on dev logs in Gadgetbridge and then trigger a sync to see whether it's receiving and objecting to them
The logs show it fetching Stress and PAI data
14:01:39.283 [Binder:22963_1] DEBUG n.f.g.s.d.h.o.AbstractFetchOperation - Performing next operation fetching manual stress data
14:01:39.554 [Binder:22963_1] DEBUG n.f.g.s.d.h.o.AbstractFetchOperation - Performing next operation fetching pai data
But there's no mention of steps.
The upstream issue definitely mentions steps being collected
So far, I could synchronize the sleeping data, the steps data, vibrate the watch and check the beat rate
I'm running the same firmware version (v2.3.6.05) as the issue opener, so I'm guessing GB might only support retrieving steps that are recorded against an activity.
I'll pivot back to using Zepp for the time being, but should probably dig a bit deeper into this as it's so close to what I want.
For whatever reason, Gadgetbridge wasn't fully communicating with the watch. I unpaired it from the phone, deleted from GB and then re-added. Everything started working.
Activity
31-Jul-23 09:29
assigned to @btasker
31-Jul-23 09:29
moved from project-management-only/staging#6
31-Jul-23 09:29
assigned to @btasker
31-Jul-23 09:30
I won't know exactly what data is available for extraction until the Bip arrives (a little later today, hopefully).
But, in the meantime, I can be looking at the DB export retrieval stuff.
31-Jul-23 09:40
Getting the export written into Nextcloud is actually quite easy.
In Gadgetbridge's settings is an Auto export section. Tapping the location option lets you set the location the export should be saved to. Helpfully, the Nextcloud app is listed in the location options.
You have to provide a filename for the export - I had hoped they might be time indexed, but it seems not.
With that set, my phone has started exporting a (currently empty) database into Nextcloud once an hour.
31-Jul-23 09:46
The next thing to think about, then, is fetching the file.
The easiest way would be to have my laptop run a cron - it already has access to Nextcloud (because I run the Nextcloud Desktop Client).
But, I don't really want this tied to my laptop - I'd rather it ran as a headless process on something I don't periodically unplug and walk off with.
Ideally, I'd like to containerise it - that'll then give me a choice between running it on one of my docker hosts or running it as a kubernetes job.
Running as an ephemeral container, though, rules out using the desktop sync client - we'll need to fetch the database.
The other thing to think about is whether we want to track some sort of state - if the database hasn't changed since the last invocation, do we actually want to do anything? That can probably be added later though.
31-Jul-23 09:48
Nextcloud makes files available via WebDAV so it should be relatively easy to fetch a specific path.
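A fetch via that endpoint could look something like this sketch - WEBDAV_PATH and EXPORT_FILENAME crop up in the commit below, while the Nextcloud connection variables here are placeholders:

import os
import requests

def fetch_export(dest="gadgetbridge.sqlite"):
    # Nextcloud exposes user files at /remote.php/dav/files/<user>/<path>
    url = "{}/remote.php/dav/files/{}/{}/{}".format(
        os.environ["NEXTCLOUD_URL"],
        os.environ["NEXTCLOUD_USER"],
        os.environ.get("WEBDAV_PATH", "GadgetBridge"),
        os.environ.get("EXPORT_FILENAME", "gadgetbridge"),
    )
    resp = requests.get(
        url,
        auth=(os.environ["NEXTCLOUD_USER"], os.environ["NEXTCLOUD_PASS"]),
        stream=True,
        timeout=30,
    )
    resp.raise_for_status()
    with open(dest, "wb") as f:
        for chunk in resp.iter_content(chunk_size=65536):
            f.write(chunk)
    return dest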
31-Jul-23 10:15
I had a quick play around with Nextcloud's public sharing functionality. Technically we could use that instead of WebDAV but I don't really like the idea of it being publicly available (you can add a password to it, but that auth flow isn't well disposed to automation).
I obviously don't really want a config file sitting around with my nextcloud creds in it.
So, instead, I've created a new nextcloud user (service_user) and made sure it's configured to accept shares automatically.
I've then shared the GadgetBridge directory with service_user (the process was a little unintuitive IMO - after right clicking on the dir and saying Share, you need to type the user's name in that unobtrusive text box - I hadn't even really noticed it).
I've allowed service_user write permissions on the directory (because that means we can use it to store state later).
31-Jul-23 10:50
mentioned in commit utilities/gadgetbridge_to_influxdb@fcc931a8d74c1239de1b94f8edafbe734bc6c68a
Message
Fetch the file from WebDAV source (project-management-only/staging#6)
Currently the script will fetch the file from WebDAV, checking that the configured path (built from WEBDAV_PATH and EXPORT_FILENAME) exists.
The next step will be to have SQLite open a handle on it ready for querying information out.
31-Jul-23 10:53
mentioned in commit utilities/gadgetbridge_to_influxdb@bfa596b45f0b4dcce5fe9f30e11430bf1c3282e3
Message
Open the downloaded file using sqlite3 (project-management-only/staging#6)
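For reference, that step is small - a sketch (filename illustrative), opening the export read-only so it can't be modified by accident:

import sqlite3

def open_export(path="gadgetbridge.sqlite"):
    # Open via a URI so we can pass mode=ro (read-only)
    conn = sqlite3.connect("file:{}?mode=ro".format(path), uri=True)
    conn.row_factory = sqlite3.Row  # allows column access by name
    return conn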
31-Jul-23 10:55
We're about ready to start thinking about queries.
Can't say for sure what'll be in the database until my watch turns up and is hooked up, however the Gadgetbridge Wiki has a bunch of example queries which can probably be used as a guide in the meantime (although they mostly relate to Pebbles).
31-Jul-23 11:08
It looks like, under the hood, the Bip is a Huami device (or at least compatible with them)
Within the database, there are a bunch of HUAMI dedicated tables
So we almost certainly want to look at those first.
Most should be relatively straightforward to collect then. PAI apparently stands for "Personal Activity Intelligence" and is a normalised indicator of activity levels - the idea being to keep it at or above 100 across 7 days.
There are some non-Huami specific tables in there that'll probably be of interest too
31-Jul-23 11:15
So, assuming that the Bip tracks all of them (it should), we should be able to write queries to extract
Heart rate might be a bit of an odd one, as the current heart rate could be in either of HUAMI_EXTENDED_ACTIVITY_SAMPLE or HUAMI_HEART_RATE_MANUAL_SAMPLE. Probably best to write them into separate fields (or perhaps add a tag - is_activity - to differentiate between them).
31-Jul-23 12:22
Starting easy and writing a query for the SpO2 info. The results are queried and then a dict is constructed from the result, delineating fields and tags.
All subsequent queries will use the same structure - it should make generating LP quite simple.
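A sketch of that shared structure, assuming the HUAMI_SPO2_SAMPLE table and column names (to be verified once real data is available):

def query_spo2(conn):
    # Table/column names are assumptions based on the Huami table naming
    rows = conn.execute(
        "SELECT TIMESTAMP, DEVICE_ID, SPO2 FROM HUAMI_SPO2_SAMPLE WHERE SPO2 > 0"
    ).fetchall()
    results = []
    for ts, device_id, spo2 in rows:
        results.append({
            "timestamp": ts,
            "fields": {"spo2": spo2},
            "tags": {"device_id": device_id, "sample_type": "spo2"},
        })
    return results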
31-Jul-23 12:25
mentioned in commit utilities/gadgetbridge_to_influxdb@8b2021b0fa9b60eb50fcd38f2e4396f82587d03e
Message
Query SpO2 information from the database and reformat into a normalised dict structure project-management-only/staging#6
31-Jul-23 12:25
mentioned in commit utilities/gadgetbridge_to_influxdb@b666db0c3e6b5e64c0dc7ad4cf57a633a1b65194
Message
Capture stress level project-management-only/staging#6
31-Jul-23 12:40
mentioned in commit utilities/gadgetbridge_to_influxdb@539dd54d0b5dac2578bb106e1c67d6dc8255edd8
Message
Collect respiratory rate, PAI and battery level (project-management-only/staging#6)
31-Jul-23 12:45
mentioned in commit utilities/gadgetbridge_to_influxdb@cb60925433f542e79f3e48386420aa107ba47704
Message
Collect manual, max and resting heart rate samples project-management-only/staging#6
These all live in different tables but otherwise share a schema, so we iterate through the types generating exactly the same schema of result, differentiated only by the value of tag sample_type which will be one of
manual
max
resting
The manual samples are taken at the interval set on the watch for non-activity HR measurements. If activity is detected measurements will be written into a different table (which we'll add support for shortly)
31-Jul-23 12:53
mentioned in commit utilities/gadgetbridge_to_influxdb@4074851b6c14f7abef95edba39b0d165f5ef6bb9
Message
Collect activity values project-management-only/staging#6
This collects data from HUAMI_EXTENDED_ACTIVITY_SAMPLE and exposes the following fields. The value of tag sample_type will always be activity on these points.
31-Jul-23 12:56
We now have queries to extract the data - they may need tweaking once we have some data to go on, but they're only extracting the raw data as it's stored in the database so they shouldn't need much.
Can't really calculate any aggregates (such as cumulative daily steps) until we know what the data looks like.
So, it's time to move on to having this output to Influx.
31-Jul-23 13:09
mentioned in commit utilities/gadgetbridge_to_influxdb@ae47313ebd704c1247578f2968c5c7010a8248fc
Message
Write data out to InfluxDB (project-management-only/staging#6)
Take the result set and turn them into InfluxDB points, writing out to InfluxDB as we go.
The client is in batching mode, so each write is into the client's buffer, flushed whenever the batch size (or flush interval) is hit with one final flush before function exit.
InfluxDB config is passed through a set of environment variables
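A rough sketch of that write path using the influxdb-client package - only INFLUXDB_MEASUREMENT is a confirmed variable name, the connection variables and batch sizes here are placeholders:

import os
from influxdb_client import InfluxDBClient, Point, WritePrecision
from influxdb_client.client.write_api import WriteOptions

MEASUREMENT = os.environ.get("INFLUXDB_MEASUREMENT", "gadgetbridge")

def write_results(results):
    with InfluxDBClient(
        url=os.environ["INFLUXDB_URL"],
        token=os.environ["INFLUXDB_TOKEN"],
        org=os.environ.get("INFLUXDB_ORG", "-"),
    ) as client:
        # Batching mode: points accumulate in the client's buffer and are flushed
        # whenever batch_size or flush_interval is hit; leaving the context manager
        # performs the final flush before exit.
        write_options = WriteOptions(batch_size=500, flush_interval=10_000)
        with client.write_api(write_options=write_options) as write_api:
            for res in results:
                point = Point(MEASUREMENT).time(res["timestamp"], WritePrecision.S)
                for key, val in res["tags"].items():
                    point = point.tag(key, str(val))
                for key, val in res["fields"].items():
                    point = point.field(key, val)
                write_api.write(bucket=os.environ["INFLUXDB_BUCKET"], record=point)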
31-Jul-23 13:17
There's obviously nothing to output yet, but the script now attempts to turn results into points and write out to Influx. The default measurement name (controlled by env var INFLUXDB_MEASUREMENT) is gadgetbridge.
With the script theoretically complete, the next thing to look at is containerising it so that we can start to think about how best to schedule it.
31-Jul-23 13:23
mentioned in commit utilities/gadgetbridge_to_influxdb@05dc1fe23fbf9798bd415c0f2378988ff9356f7d
Message
Add simple dockerfile project-management-only/staging#6
31-Jul-23 13:57
OK, so now that it's containerised, need to think about options around scheduling.
We could
Just run it on a docker host via cron
Set it up as a Kubernetes job
Turn it into a prefect flow and deploy that way
Although I like the idea of number 3, I think I'd rather wait until I've had eyes on the data (because, ideally, we'll take advantage of Prefect's events support to trigger notifications).
For now, I think we'll create a Kubernetes cronjob.
Storing the credentials as secrets
Then defining a cronjob, referencing the secrets so they're available in the environment
Applied with
Should run at 15 and 45 mins past. I've actually just adjusted it to run on the hour too so that I don't have to wait too long to be able to check the job's logs
31-Jul-23 14:37
Fuck sake...
After all that, the watch has arrived and proven not to be supported by Gadgetbridge.
Reading between the lines it looks like the manufacturer may have changed the underlying firmware.
Gadgetbridge shows it as unsupported when scanning.
Well, that's just balls isn't it.
31-Jul-23 14:54
mentioned in issue project-management-only/staging#7
31-Jul-23 16:55
mentioned in issue #35
24-Aug-23 10:16
Good news!
There's been some movement, Gadgetbridge have been able to add support for the Bip3. Direct support is, currently, pending release, but I've been able to add my watch as a GTS-2 in the meantime (thanks to @andy for flagging it to me).
Re-opening so that I can pick this back up (though, if there are any substantial changes needed, they should probably be filed in the dedicated project)
24-Aug-23 10:24
OK, so currently, I have the device connected and measuring HR etc, but haven't done much more. Here's how I got to where I currently am:
Ensure that Zepp is stopped (ideally, uninstalled)
Getting the auth code (must already have connected to the Zepp app at least once)
This will log in and print the auth code and MAC address. Keep a note of both.
Adding the device:
Amazfit GTS 2
The device will be added, but not connected.
Hit its settings cog, then Auth Key, and paste in the auth code retrieved by huami_token.
Hit Connect
Gadgetbridge should now successfully connect to the Bip.
Hit the settings cog again, and under Heart Rate Monitoring enable Use heart rate sensor to improve sleep detection, then set Whole day HR measurement and choose 1 hour (the watch will sample more regularly if you're active).
24-Aug-23 10:29
The next step, then, is getting auto-DB exports enabled again.
Hit the Menu
Choose Settings
Scroll down to the Auto export location
Tap Export Location
Browse into a Nextcloud location
Provide gadgetbridge as the output file name
Back in the menu, tap Auto export enabled
Set the export interval to 1 hour
Whilst in the settings menu, it's also worth enabling Auto fetch activity data so that Gadgetbridge attempts to connect to the watch whenever your phone is unlocked (which, ideally, will mean data is synced sooner).
24-Aug-23 10:31
Then, as described above, I've shared the relevant Nextcloud directory with a service user.
24-Aug-23 10:55
With a few minor tweaks/fixes the script works - at least in so much as it's now writing battery levels into InfluxDB.
Heart-rate measurements though, are notably absent.
The script attempts to pull HR data from
But these are all empty. Gadgetbridge is showing HR data on its graph though, so it's being recorded somewhere.
To try and find out where, I've dumped the Sqlite DB to text:
It contains lots of these
So, we need to fetch some data from there
Whilst we're here, the other tables containing data are
24-Aug-23 11:04
mentioned in commit utilities/gadgetbridge_to_influxdb@eb87654daf52bf078bb496c404b8b79a86676989
Message
Collect periodic heart-rate and steps samples (identified in jira-projects/MISC#34)
24-Aug-23 11:05
That's done the trick - I can now retrieve HR data with
24-Aug-23 11:26
So, the next thing to do is to set up scheduled runs of the container. As before, I'm going to do this with a Kube cron job.
The secrets already exist from the previous setup.
I've updated the docker image to use the latest version of the script, but I'm going to have my cronjob pull from my local registry rather than docker hub - I've noticed a few crons failing recently, I guess Docker have been tightening their rate-limits further as part of their crusade against having users.
Applying
Rather than waiting ~15 mins for the next run, have created a job from the cronjob
No exceptions is a good sign
24-Aug-23 11:45
I've created a task in Tasker which sends intent nodomain.freeyourgadget.gadgetbridge.command.TRIGGER_EXPORT.
In Gadgetbridge, under Intent API, I've enabled Allow database export.
I can now use the Tasker task to manually trigger an export of the database.
I've triggered the Kube job again - we should get up to date HR data in the DB
24-Aug-23 12:38
Although data's coming over, something's still not quite right.
Gadgetbridge doesn't seem to be getting step data at all - it's claiming 0 steps today, whilst the watch is reporting around 1600.
The band doesn't seem to be taking a HR measurement at the configured interval - I set it to 10 minutes, but the points that came through in the database are an hour apart (looks like it takes multiple reads each time)
24-Aug-23 12:43
I think... rather than chasing this down, it'd be wise to switch to the nightly build of Gadgetbridge and see how the "official" support behaves - it may well already have been addressed (plus, we'll get sleep, stress and PAI support)
24-Aug-23 13:11
Details on switching to Nightly are here
Have switched over - now have the option to collect Stress information, and PAI is showing up in the graphs.
Steps are still 0 though
Dumping to text doesn't show them appearing anywhere else. I might turn on dev logs in Gadgetbridge and then trigger a sync to see whether it's receiving and objecting to them
The logs show it fetching Stress and PAI data
But there's no mention of steps.
The upstream issue definitely mentions steps being collected
I'm running the same firmware version (v2.3.6.05) as the issue opener, so I'm guessing GB might only support retrieving steps that are recorded against an activity.
I'll pivot back to using Zepp for the time being, but should probably dig a bit deeper into this as it's so close to what I want.
24-Aug-23 16:46
I commented on the upstream ticket to note I'd seen some issues - have tried what they suggested, but no dice so far.
I'm probably not going to have time to play with this for a bit, so switching back to Zepp for now.
25-Aug-23 07:53
mentioned in commit utilities/gadgetbridge_to_influxdb@97410101f2b568635e8b6263a0112fed0a1bb66b
Message
Fix: Huami samples seem to use millisecond epochs
This adjusts timestamp calculation for the relevant tables.
We could probably autodetect this just before the write - if a timestamp is significantly greater than allowed, we could divide it by 1000
cc jira-projects/MISC#34
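A sketch of that autodetection idea (not what the commit does - the commit just adjusts the timestamp calculation for the affected tables):

def normalise_epoch(ts):
    # Anything this far in the future when read as seconds is almost
    # certainly a millisecond epoch, so scale it down
    if ts > 100_000_000_000:
        return ts // 1000
    return ts

assert normalise_epoch(1692871268) == 1692871268       # already seconds
assert normalise_epoch(1692871268000) == 1692871268    # milliseconds, scaled down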
25-Aug-23 16:52
We're up and running!
For whatever reason, Gadgetbridge wasn't fully communicating with the watch. I unpaired it from the phone, deleted from GB and then re-added. Everything started working.
Details in upstream ticket
I've got step data coming into the DB
Interestingly, it's point-in-time rather than cumulative.
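Because the samples are point-in-time, a cumulative daily figure could be derived by summing them per day - a rough sketch over the normalised result dicts used elsewhere (the steps field name is an assumption):

from collections import defaultdict
from datetime import datetime, timezone

def daily_step_totals(results):
    # Sum the point-in-time step samples into a per-day total
    totals = defaultdict(int)
    for res in results:
        steps = res["fields"].get("steps")
        if not steps or steps < 0:
            continue
        day = datetime.fromtimestamp(res["timestamp"], tz=timezone.utc).date()
        totals[day] += steps
    return dict(totals)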
25-Aug-23 17:02
Heart rate looks a bit questionable
But, I've seen elsewhere that the watch uses 255 and 254 for missed/failed reads. Need to filter those out (either in queries, or in the script)
Raised utilities/gadgetbridge_to_influxdb#1 to follow up on this
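A sketch of what filtering in the script might look like (treating 254/255 and non-positive values as invalid - exactly which sentinels apply is what that issue will confirm):

INVALID_HR_VALUES = {254, 255}  # apparent missed/failed read markers

def valid_heart_rate(value):
    # Drop sentinel and non-positive readings before they reach InfluxDB
    return value is not None and value > 0 and value not in INVALID_HR_VALUES

assert not valid_heart_rate(255)
assert not valid_heart_rate(-1)
assert valid_heart_rate(72)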
26-Aug-23 09:59
I'm going to close this off now and move to tracking under the code's project.
I've marked it as public, so it should soon appear at https://projects.bentasker.co.uk/gils_projects/project/utilities/gadgetbridge_to_influxdb.html
26-Aug-23 16:06
mentioned in commit sysconfigs/bumblebee-kubernetes-charts@c8eeb8b5925b2a7d4f824fd587db2d4b5b402110
Message
Set Gadgetbridge cron live jira-projects/MISC#34
This moves the container from writing into the testing db to writing into the live DB.
28-Aug-23 16:14
mentioned in issue utilities/gadgetbridge_to_influxdb#15