project jira-projects / Miscellaneous

jira-projects/MISC#34: Automate extraction of stepcounts/heartrate from Gadgetbridge



Issue Information

Issue Type: issue
Status: closed
Reported By: btasker
Assigned To: btasker

Created: 31-Jul-23 09:29



Description

I'm looking at moving from my Watchy to sometimes wearing an Amazfit Bip 3 Pro.

The Bip is going to be linked to GadgetBridge.

I'm raising this ticket to track the following:

  • Enabling auto-export in Gadgetbridge
  • Writing the export into nextcloud
  • Having a script periodically fetch the export (via Webdav?)
  • Processing the export (it's just a SQLite db) to extract health stats

The collected data should then be written onwards into InfluxDB.




Activity


assigned to @btasker


I won't know exactly what data is available for extraction until the Bip arrives (a little later today, hopefully).

But, in the meantime, I can be looking at the DB export retrieval stuff.

Getting the export written into Nextcloud is actually quite easy.

In Gadgetbridge's settings is an Auto export section:

Screenshot_2023-07-31-10-31-26-678_nodomain.freeyourgadget.gadgetbridge

Tapping the location option lets you set where the export should be saved. Helpfully, the Nextcloud app is listed in the location options.

signal-2023-07-31-103600_002

You have to provide a filename for the export - I had hoped they might be time indexed, but it seems not.

With that set, my phone has started exporting a (currently empty) database into Nextcloud once an hour.

The next thing to think about, then, is fetching the file.

The easiest way would be to have my laptop run a cron - it already has access to Nextcloud (because I run the Nextcloud Desktop Client).

But, I don't really want this tied to my laptop - I'd rather it ran as a headless process on something I don't periodically unplug and walk off with.

Ideally, I'd like to containerise it - that'll then give me a choice between running it on one of my docker hosts or running it as a kubernetes job.

Running as an ephemeral container, though, rules out using the desktop sync client - we'll need to fetch the database.

The other thing to think about is whether we want to track some sort of state - if the database hasn't changed since the last invocation, do we actually want to do anything? That can probably be added later though.
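If that state tracking does get added, a hash comparison would probably be enough. A minimal sketch (the state-file location and helper names are assumptions, not the committed code):

```python
import hashlib
from pathlib import Path

# Hypothetical state location - somewhere the job can persist between runs
STATE_FILE = Path("/tmp/gadgetbridge_last_hash")


def file_hash(path):
    """SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def has_changed(db_path, state_file=STATE_FILE):
    """True if the export differs from the one seen on the last run."""
    new_hash = file_hash(db_path)
    old_hash = state_file.read_text().strip() if state_file.exists() else None
    if new_hash == old_hash:
        return False  # unchanged - nothing to do this invocation
    state_file.write_text(new_hash)
    return True
```

An unchanged export would then let the job exit early, before opening SQLite at all.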

Nextcloud makes files available via WebDAV so it should be relatively easy to fetch a specific path.

I had a quick play around with Nextcloud's public sharing functionality. Technically we could use that instead of WebDAV but I don't really like the idea of it being publicly available (you can add a password to it, but that auth flow isn't well disposed to automation).

I obviously don't really want a config file sitting around with my nextcloud creds in it.

So, instead, I've created a new nextcloud user (service_user) and made sure it's configured to accept shares automatically

Screenshot_20230731_111057

I've then shared the GadgetBridge directory with service_user

Screenshot_20230731_111409

(The process was a little unintuitive IMO - after right clicking on the dir and saying Share, you need to type the user's name in that unobtrusive text box - I hadn't even really noticed it).

I've allowed service_user write permissions on the directory (because that means we can use it to store state later).


mentioned in commit utilities/gadgetbridge_to_influxdb@fcc931a8d74c1239de1b94f8edafbe734bc6c68a

Commit: utilities/gadgetbridge_to_influxdb@fcc931a8d74c1239de1b94f8edafbe734bc6c68a 
Author: B Tasker                            
                            
Date: 2023-07-31T11:48:31.000+01:00 

Message

Fetch the file from WebDAV source (project-management-only/staging#6)

Currently the script will

  • Connect to a WebDAV server
  • Validate that the remote file (a combination of env vars WEBDAV_PATH and EXPORT_FILENAME) exists
  • Create a temporary working directory
  • Download the file into it
  • Tidy up and remove the temporary directory (and downloaded file)

The next step will be to have SQLite open a handle on it ready for querying information out

+93 -0 (93 lines changed)
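The steps in that commit message could be sketched with just the standard library (the env var names match those used later in this ticket, but `build_export_url`, `fetch_export` and the URL layout are assumptions - the real script may well use a WebDAV client library instead):

```python
import base64
import os
import shutil
import tempfile
import urllib.request

# Hypothetical defaults - the real values come from the environment
WEBDAV_URL = os.getenv("WEBDAV_URL", "https://nextcloud.example.invalid/remote.php/dav/")
WEBDAV_USER = os.getenv("WEBDAV_USER", "service_user")
WEBDAV_PASS = os.getenv("WEBDAV_PASS", "")
WEBDAV_PATH = os.getenv("WEBDAV_PATH", "files/service_user/GadgetBridge/")
EXPORT_FILENAME = os.getenv("EXPORT_FILENAME", "gadgetbridge")


def build_export_url(base, dav_path, filename):
    """Join the WebDAV base URL, shared path and export filename."""
    return base.rstrip("/") + "/" + dav_path.strip("/") + "/" + filename


def fetch_export():
    """Download the export into a temp dir; the caller removes it afterwards."""
    url = build_export_url(WEBDAV_URL, WEBDAV_PATH, EXPORT_FILENAME)
    auth = base64.b64encode(f"{WEBDAV_USER}:{WEBDAV_PASS}".encode()).decode()
    req = urllib.request.Request(url, headers={"Authorization": f"Basic {auth}"})
    tmpdir = tempfile.mkdtemp()
    local_path = os.path.join(tmpdir, EXPORT_FILENAME)
    with urllib.request.urlopen(req) as resp, open(local_path, "wb") as f:
        shutil.copyfileobj(resp, f)
    return tmpdir, local_path
```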

mentioned in commit utilities/gadgetbridge_to_influxdb@bfa596b45f0b4dcce5fe9f30e11430bf1c3282e3

Commit: utilities/gadgetbridge_to_influxdb@bfa596b45f0b4dcce5fe9f30e11430bf1c3282e3 
Author: B Tasker                            
                            
Date: 2023-07-31T11:53:20.000+01:00 

Message

Open the downloaded file using sqlite3 (project-management-only/staging#6)

+14 -0 (14 lines changed)

We're about ready to start thinking about queries.

Can't say for sure what'll be in the database until my watch turns up and is hooked up, however the Gadgetbridge wiki has a bunch of example queries which can probably be used as a guide in the meantime (although they mostly relate to Pebbles).

It looks like, under the hood, the Bip is a Huami device (or at least compatible with them)

Within the database, there are a bunch of HUAMI-dedicated tables:

sqlite> .tables
ACTIVITY_DESCRIPTION                 LEFUN_ACTIVITY_SAMPLE              
ACTIVITY_DESC_TAG_LINK               LEFUN_BIOMETRIC_SAMPLE             
ALARM                                LEFUN_SLEEP_SAMPLE                 
BANGLE_JSACTIVITY_SAMPLE             MAKIBES_HR3_ACTIVITY_SAMPLE        
BASE_ACTIVITY_SUMMARY                MI_BAND_ACTIVITY_SAMPLE            
BATTERY_LEVEL                        NO1_F1_ACTIVITY_SAMPLE             
CALENDAR_SYNC_STATE                  NOTIFICATION_FILTER                
CASIO_GBX100_ACTIVITY_SAMPLE         NOTIFICATION_FILTER_ENTRY          
CONTACT                              PEBBLE_HEALTH_ACTIVITY_OVERLAY     
DEVICE                               PEBBLE_HEALTH_ACTIVITY_SAMPLE      
DEVICE_ATTRIBUTES                    PEBBLE_MISFIT_SAMPLE               
FIT_PRO_ACTIVITY_SAMPLE              PEBBLE_MORPHEUZ_SAMPLE             
HPLUS_HEALTH_ACTIVITY_OVERLAY        PINE_TIME_ACTIVITY_SAMPLE          
HPLUS_HEALTH_ACTIVITY_SAMPLE         REMINDER                           
HUAMI_EXTENDED_ACTIVITY_SAMPLE       SONY_SWR12_SAMPLE                  
HUAMI_HEART_RATE_MANUAL_SAMPLE       TAG                                
HUAMI_HEART_RATE_MAX_SAMPLE          TLW64_ACTIVITY_SAMPLE              
HUAMI_HEART_RATE_RESTING_SAMPLE      USER                               
HUAMI_PAI_SAMPLE                     USER_ATTRIBUTES                    
HUAMI_SLEEP_RESPIRATORY_RATE_SAMPLE  WATCH_XPLUS_ACTIVITY_SAMPLE        
HUAMI_SPO2_SAMPLE                    WATCH_XPLUS_HEALTH_ACTIVITY_OVERLAY
HUAMI_STRESS_SAMPLE                  WORLD_CLOCK                        
HYBRID_HRACTIVITY_SAMPLE             XWATCH_ACTIVITY_SAMPLE             
ID115_ACTIVITY_SAMPLE                ZE_TIME_ACTIVITY_SAMPLE            
JYOU_ACTIVITY_SAMPLE                 android_metadata                   

So we almost certainly want to look at those first.

sqlite> pragma table_info(HUAMI_EXTENDED_ACTIVITY_SAMPLE);
cid|name|type|notnull|dflt_value|pk
0|TIMESTAMP|INTEGER|1||1
1|DEVICE_ID|INTEGER|1||2
2|USER_ID|INTEGER|1||0
3|RAW_INTENSITY|INTEGER|1||0
4|STEPS|INTEGER|1||0
5|RAW_KIND|INTEGER|1||0
6|HEART_RATE|INTEGER|1||0
7|UNKNOWN1|INTEGER|0||0
8|SLEEP|INTEGER|0||0
9|DEEP_SLEEP|INTEGER|0||0
10|REM_SLEEP|INTEGER|0||0

sqlite> pragma table_info(HUAMI_HEART_RATE_MANUAL_SAMPLE);
cid|name|type|notnull|dflt_value|pk
0|TIMESTAMP|INTEGER|1||1
1|DEVICE_ID|INTEGER|1||2
2|USER_ID|INTEGER|1||0
3|UTC_OFFSET|INTEGER|1||0
4|HEART_RATE|INTEGER|1||0

sqlite> pragma table_info(HUAMI_HEART_RATE_MAX_SAMPLE);
cid|name|type|notnull|dflt_value|pk
0|TIMESTAMP|INTEGER|1||1
1|DEVICE_ID|INTEGER|1||2
2|USER_ID|INTEGER|1||0
3|UTC_OFFSET|INTEGER|1||0
4|HEART_RATE|INTEGER|1||0

sqlite> pragma table_info(HUAMI_HEART_RATE_RESTING_SAMPLE);
cid|name|type|notnull|dflt_value|pk
0|TIMESTAMP|INTEGER|1||1
1|DEVICE_ID|INTEGER|1||2
2|USER_ID|INTEGER|1||0
3|UTC_OFFSET|INTEGER|1||0
4|HEART_RATE|INTEGER|1||0


sqlite> pragma table_info(HUAMI_PAI_SAMPLE);
cid|name|type|notnull|dflt_value|pk
0|TIMESTAMP|INTEGER|1||1
1|DEVICE_ID|INTEGER|1||2
2|USER_ID|INTEGER|1||0
3|UTC_OFFSET|INTEGER|1||0
4|PAI_LOW|REAL|1||0
5|PAI_MODERATE|REAL|1||0
6|PAI_HIGH|REAL|1||0
7|TIME_LOW|INTEGER|1||0
8|TIME_MODERATE|INTEGER|1||0
9|TIME_HIGH|INTEGER|1||0
10|PAI_TODAY|REAL|1||0
11|PAI_TOTAL|REAL|1||0

sqlite> pragma table_info(HUAMI_SLEEP_RESPIRATORY_RATE_SAMPLE);
cid|name|type|notnull|dflt_value|pk
0|TIMESTAMP|INTEGER|1||1
1|DEVICE_ID|INTEGER|1||2
2|USER_ID|INTEGER|1||0
3|UTC_OFFSET|INTEGER|1||0
4|RATE|INTEGER|1||0

sqlite> pragma table_info(HUAMI_SPO2_SAMPLE);
cid|name|type|notnull|dflt_value|pk
0|TIMESTAMP|INTEGER|1||1
1|DEVICE_ID|INTEGER|1||2
2|USER_ID|INTEGER|1||0
3|TYPE_NUM|INTEGER|1||0
4|SPO2|INTEGER|1||0

sqlite> pragma table_info(HUAMI_STRESS_SAMPLE);
cid|name|type|notnull|dflt_value|pk
0|TIMESTAMP|INTEGER|1||1
1|DEVICE_ID|INTEGER|1||2
2|USER_ID|INTEGER|1||0
3|TYPE_NUM|INTEGER|1||0
4|STRESS|INTEGER|1||0

Most should be relatively straightforward to collect then. PAI apparently stands for "Personal Activity Intelligence" and is a normalised indicator of activity levels - the idea being to keep it at or above 100 across a rolling 7 days.

There are some non-Huami-specific tables in there that'll probably be of interest too:

sqlite> pragma table_info(BATTERY_LEVEL);
cid|name|type|notnull|dflt_value|pk
0|TIMESTAMP|INTEGER|1||1
1|DEVICE_ID|INTEGER|1||2
2|LEVEL|INTEGER|1||0
3|BATTERY_INDEX|INTEGER|1||3

sqlite> pragma table_info(DEVICE);
cid|name|type|notnull|dflt_value|pk
0|_id|INTEGER|0||1
1|NAME|TEXT|1||0
2|MANUFACTURER|TEXT|1||0
3|IDENTIFIER|TEXT|1||0
4|TYPE|INTEGER|1||0
5|MODEL|TEXT|0||0
6|ALIAS|TEXT|0||0
7|PARENT_FOLDER|TEXT|0||0

So, assuming that the Bip tracks all of them (it should), we should be able to write queries to extract

  • Current battery level (allowing alerting if it's getting low)
  • Step count (might only be when the watch detects extended activity though, we'll see)
  • Activity intensity
  • Sleep state
  • Heart Rate
  • Incremental PAI scores
  • Sleep respiratory rate
  • SpO2 level (oxygen blood saturation)
  • Stress level

Heart rate might be a bit of an odd one, as the current heart rate could be in either of HUAMI_EXTENDED_ACTIVITY_SAMPLE or HUAMI_HEART_RATE_MANUAL_SAMPLE. Probably best to write them into separate fields (or perhaps add a tag - is_activity - to differentiate between them).

Starting easy, with a query for the SpO2 info. The results are queried and then a dict is constructed from each result, delineating fields and tags:

results = []
devices = {}
# Only look at recent samples (QUERY_DURATION is seconds, read from the environment)
query_start_bound = int(time.time()) - QUERY_DURATION

# Pull out device names
device_query = "select _id, NAME from DEVICE"
res = cur.execute(device_query)
for r in res.fetchall():
    devices[f"dev-{r[0]}"] = r[1]


spo2_data_query = f"SELECT TIMESTAMP, DEVICE_ID, TYPE_NUM, SPO2 FROM HUAMI_SPO2_SAMPLE WHERE TIMESTAMP >= {query_start_bound} ORDER BY TIMESTAMP ASC"

res = cur.execute(spo2_data_query)
for r in res.fetchall():
    row = {
            "timestamp": r[0] * 1000000000, # Convert to nanos
            "fields" : {
                "spo2" : r[3]
                },
            "tags" : {
                "type_num" : r[2],
                "device" : devices[f"dev-{r[1]}"]
                }
        }
    results.append(row)

All subsequent queries will use the same structure - it should make generating line protocol quite simple.
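For illustration, a dict in that shape maps onto line protocol roughly like this (hand-rolled sketch; real LP escaping has more cases, and integer fields normally carry an `i` suffix):

```python
def to_line_protocol(measurement, row):
    """Render one result dict (timestamp/fields/tags) into InfluxDB line protocol.

    Simplified for illustration - tag values get minimal escaping and
    field values are emitted as-is.
    """
    def esc(value):
        # Escape commas and spaces in tag values
        return str(value).replace(",", "\\,").replace(" ", "\\ ")

    tags = ",".join(f"{k}={esc(v)}" for k, v in sorted(row["tags"].items()))
    fields = ",".join(f"{k}={v}" for k, v in sorted(row["fields"].items()))
    return f"{measurement},{tags} {fields} {row['timestamp']}"
```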


mentioned in commit utilities/gadgetbridge_to_influxdb@8b2021b0fa9b60eb50fcd38f2e4396f82587d03e

Commit: utilities/gadgetbridge_to_influxdb@8b2021b0fa9b60eb50fcd38f2e4396f82587d03e 
Author: B Tasker                            
                            
Date: 2023-07-31T13:23:14.000+01:00 

Message

Query SpO2 information from the database and reformat into a normalised dict structure project-management-only/staging#6

+35 -0 (35 lines changed)

mentioned in commit utilities/gadgetbridge_to_influxdb@b666db0c3e6b5e64c0dc7ad4cf57a633a1b65194

Commit: utilities/gadgetbridge_to_influxdb@b666db0c3e6b5e64c0dc7ad4cf57a633a1b65194 
Author: B Tasker                            
                            
Date: 2023-07-31T13:25:19.000+01:00 

Message

Capture stress level project-management-only/staging#6

+15 -0 (15 lines changed)

mentioned in commit utilities/gadgetbridge_to_influxdb@539dd54d0b5dac2578bb106e1c67d6dc8255edd8

Commit: utilities/gadgetbridge_to_influxdb@539dd54d0b5dac2578bb106e1c67d6dc8255edd8 
Author: B Tasker                            
                            
Date: 2023-07-31T13:39:47.000+01:00 

Message

Collect respiratory rate, PAI and battery level (project-management-only/staging#6)

+68 -2 (70 lines changed)

mentioned in commit utilities/gadgetbridge_to_influxdb@cb60925433f542e79f3e48386420aa107ba47704

Commit: utilities/gadgetbridge_to_influxdb@cb60925433f542e79f3e48386420aa107ba47704 
Author: B Tasker                            
                            
Date: 2023-07-31T13:43:49.000+01:00 

Message

Collect manual, max and resting heart rate samples project-management-only/staging#6

These all live in different tables but otherwise share a schema, so we iterate through the types generating exactly the same schema of result, differentiated only by the value of tag sample_type which will be one of

  • manual
  • max
  • resting

The manual samples are taken at the interval set on the watch for non-activity HR measurements. If activity is detected measurements will be written into a different table (which we'll add support for shortly)

+27 -0 (27 lines changed)

mentioned in commit utilities/gadgetbridge_to_influxdb@4074851b6c14f7abef95edba39b0d165f5ef6bb9

Commit: utilities/gadgetbridge_to_influxdb@4074851b6c14f7abef95edba39b0d165f5ef6bb9 
Author: B Tasker                            
                            
Date: 2023-07-31T13:51:54.000+01:00 

Message

Collect activity values project-management-only/staging#6

This collects data from HUAMI_EXTENDED_ACTIVITY_SAMPLE and exposes the following fields

  • intensity
  • steps
  • heart_rate
  • sleep
  • deep_sleep
  • rem_sleep

The value of tag sample_type will always be activity on these points

+34 -1 (35 lines changed)

We now have queries to extract the data - they may need tweaking once we have some data to go on, but they're only extracting the raw data as it's stored in the database so they shouldn't need much.

Can't really calculate any aggregates (such as cumulative daily steps) until we know what the data looks like.

So, it's time to move onto having this output to Influx.


mentioned in commit utilities/gadgetbridge_to_influxdb@ae47313ebd704c1247578f2968c5c7010a8248fc

Commit: utilities/gadgetbridge_to_influxdb@ae47313ebd704c1247578f2968c5c7010a8248fc 
Author: B Tasker                            
                            
Date: 2023-07-31T14:07:40.000+01:00 

Message

Write data out to InfluxDB (project-management-only/staging#6)

Take the result set and turn them into InfluxDB points, writing out to InfluxDB as we go.

The client is in batching mode, so each write is into the client's buffer, flushed whenever the batch size (or flush interval) is hit with one final flush before function exit.

InfluxDB config is passed through a set of environment variables

INFLUXDB_URL = os.getenv("INFLUXDB_URL", False)
INFLUXDB_TOKEN = os.getenv("INFLUXDB_TOKEN", "")
INFLUXDB_ORG = os.getenv("INFLUXDB_ORG", "")
INFLUXDB_MEASUREMENT = os.getenv("INFLUXDB_MEASUREMENT", "gadgetbridge")
INFLUXDB_BUCKET = os.getenv("INFLUXDB_BUCKET", "testing_db")
+42 -4 (46 lines changed)

There's obviously nothing to output yet, but the script now attempts to turn results into points and write out to Influx. The default measurement name (controlled by env var INFLUXDB_MEASUREMENT) is gadgetbridge.

With the script theoretically complete, the next thing to look at is containerising, so that we can start to think about how best to schedule it.


mentioned in commit utilities/gadgetbridge_to_influxdb@05dc1fe23fbf9798bd415c0f2378988ff9356f7d

Commit: utilities/gadgetbridge_to_influxdb@05dc1fe23fbf9798bd415c0f2378988ff9356f7d 
Author: B Tasker                            
                            
Date: 2023-07-31T14:23:11.000+01:00 

Message

Add simple dockerfile project-management-only/staging#6

+7 -0 (7 lines changed)

OK, so now that it's containerised, I need to think about options around scheduling.

We could

  1. Just run it on a docker host via cron
  2. Set it up as a Kubernetes job
  3. Turn it into a prefect flow and deploy that way

Although I like the idea of number 3, I think I'd rather wait until I've had eyes on the data (because, ideally, we'll take advantage of Prefect's events support to trigger notifications).

For now, I think we'll create a Kubernetes cronjob.

Storing the credentials as secrets

kubectl create secret generic influxdbv1 \
--from-literal=influxdb_token='<redacted>' \
--from-literal=influxdb_org='<redacted>' \
--from-literal=influxdb_url='http://<redacted>:8086'

kubectl create secret generic nextcloud \
--from-literal=webdav_url='https://<redacted>/remote.php/dav/' \
--from-literal=webdav_user='service_user' \
--from-literal=webdav_pass='<redacted>'

Then defining a cronjob, referencing the secrets so they're available in the environment

apiVersion: batch/v1
kind: CronJob
metadata:
  name: gadgetbridge-to-influxdb
spec:
  schedule: "15,45 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: gadgetbridge-to-influxdb
            image: bentasker12/gadgetbridge_to_influxdb:latest
            imagePullPolicy: IfNotPresent
            env:

            - name: INFLUXDB_BUCKET
              value: "testing_db"
            - name: INFLUXDB_MEASUREMENT
              value: "gadgetbridge"

            - name: WEBDAV_PATH
              value: "files/service_user/GadgetBridge/"
            - name: EXPORT_FILE
              value: "gadgetbridge"
            - name: QUERY_DURATION
              value: "7200"              

            - name: INFLUXDB_TOKEN
              valueFrom: 
                 secretKeyRef:
                    name: influxdbv1
                    key: influxdb_token
            - name: INFLUXDB_ORG
              valueFrom: 
                 secretKeyRef:
                    name: influxdbv1
                    key: influxdb_org
            - name: INFLUXDB_URL
              valueFrom: 
                 secretKeyRef:
                    name: influxdbv1
                    key: influxdb_url
            - name: WEBDAV_URL
              valueFrom: 
                 secretKeyRef:
                    name: nextcloud
                    key: webdav_url
            - name: WEBDAV_USER
              valueFrom: 
                 secretKeyRef:
                    name: nextcloud
                    key: webdav_user                    
            - name: WEBDAV_PASS
              valueFrom: 
                 secretKeyRef:
                    name: nextcloud
                    key: webdav_pass                     

          restartPolicy: OnFailure

Applied with

kubectl apply -f gadgetbridge_cron.yml

Should run at 15 and 45 mins past. I've actually just adjusted it to run on the hour too, so that I don't have to wait too long to be able to check the job's logs.

Fuck sake...

After all that, the watch has arrived and proven not to be supported by Gadgetbridge.

Reading between the lines it looks like the manufacturer may have changed the underlying firmware.

Gadgetbridge shows it as unsupported when scanning.

Well, that's just balls isn't it.

mentioned in issue #35

Good news!

There's been some movement: Gadgetbridge have been able to add support for the Bip 3. Direct support is currently pending release, but I've been able to add my watch as a GTS 2 in the meantime (thanks to @andy for flagging it to me).

Re-opening so that I can pick this back up (though, if there are any substantial changes needed, they should probably be filed in the dedicated project)

OK, so currently, I have the device connected and measuring HR etc, but haven't done much more. Here's how I got to where I currently am:

Ensure that Zepp is stopped (ideally, uninstalled)

Getting the auth code (must already have connected to the Zepp app at least once)

pip3 install huami_token
python3 -m huami_token \
--method amazfit \
--email <zepp login email> \
--password <zepp login password> \
--bt_keys

This will log in and print the auth code and MAC address. Keep a note of both.

Adding the device:

  • Menu
  • Debug
  • Add Test Device Manually
  • Set type to Amazfit GTS 2
  • Enter the MAC Address

The device will be added, but not connected.

Hit its settings cog, then:

  • scroll down to the bottom and tap Auth Key
  • Enter the auth code that was provided by huami_token
  • Come back out, tap on the device and choose Connect

Gadgetbridge should now successfully connect to the Bip.

Hit the settings cog again, and

  • Find Heart Rate Monitoring
  • Tap and enable Use heart rate sensor to improve sleep detection
  • Tap Whole day HR measurement and choose 1 hour (the watch will sample more regularly if you're active)

The next step, then, is getting auto-DB exports enabled again.

  • Hit the Menu
  • Choose Settings
  • Scroll down to the Auto export location
  • Tap Export Location
  • Browse into a Nextcloud location
  • Provide gadgetbridge as the output file name
  • Back in the menu, tap Auto export enabled
  • Set the export interval to 1 hour

Whilst in the settings menu, it's also worth enabling Auto fetch activity data so that Gadgetbridge attempts to connect to the watch whenever your phone is unlocked (which, ideally, will mean data is synced sooner)

Then, as described above, I've shared the relevant Nextcloud directory with a service user.

With a few minor tweaks/fixes the script works - at least in so much as it's now writing battery levels into InfluxDB.

Heart-rate measurements though, are notably absent.

The script attempts to pull HR data from

HUAMI_HEART_RATE_MANUAL_SAMPLE
HUAMI_HEART_RATE_MAX_SAMPLE
HUAMI_HEART_RATE_RESTING_SAMPLE

But these are all empty. Gadgetbridge is showing HR data on its graph though, so it's being recorded somewhere.

To try and find out where, I've dumped the SQLite DB to text:

sqlite> .output /tmp/sqlite_dump
sqlite> .dump
sqlite> .exit

It contains lots of these:

INSERT INTO MI_BAND_ACTIVITY_SAMPLE VALUES(1692871268,1,1,-1,0,1,91);
INSERT INTO MI_BAND_ACTIVITY_SAMPLE VALUES(1692871270,1,1,-1,0,1,90);
INSERT INTO MI_BAND_ACTIVITY_SAMPLE VALUES(1692871271,1,1,-1,0,1,90);
INSERT INTO MI_BAND_ACTIVITY_SAMPLE VALUES(1692871272,1,1,-1,0,1,89);

So, we need to fetch some data from there:

sqlite> .schema MI_BAND_ACTIVITY_SAMPLE
CREATE TABLE IF NOT EXISTS "MI_BAND_ACTIVITY_SAMPLE" 
("TIMESTAMP" INTEGER  NOT NULL ,
"DEVICE_ID" INTEGER  NOT NULL ,
"USER_ID" INTEGER NOT NULL ,
"RAW_INTENSITY" INTEGER NOT NULL ,
"STEPS" INTEGER NOT NULL ,
"RAW_KIND" INTEGER NOT NULL ,
"HEART_RATE" INTEGER NOT NULL ,
PRIMARY KEY ("TIMESTAMP" ,"DEVICE_ID" ) ON CONFLICT REPLACE
) WITHOUT ROWID;
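A query against that table can reuse the dict structure from earlier, skipping the -1 "no reading" placeholders. The `sample_type=periodic_samples` tag matches the InfluxQL query used later in this ticket, but the function itself is a sketch rather than the committed code:

```python
import sqlite3


def fetch_periodic_samples(cur, since, devices):
    """Pull periodic HR/step samples from MI_BAND_ACTIVITY_SAMPLE.

    A value of -1 means "no reading" for that column, so those are
    dropped rather than written onwards.
    """
    rows = []
    cur.execute(
        "SELECT TIMESTAMP, DEVICE_ID, RAW_INTENSITY, STEPS, HEART_RATE "
        "FROM MI_BAND_ACTIVITY_SAMPLE WHERE TIMESTAMP >= ? ORDER BY TIMESTAMP ASC",
        (since,),
    )
    for ts, dev_id, intensity, steps, hr in cur.fetchall():
        fields = {}
        if intensity > -1:
            fields["intensity"] = intensity
        if steps > -1:
            fields["steps"] = steps
        if hr > -1:
            fields["heart_rate"] = hr
        if not fields:
            continue  # every column was a placeholder
        rows.append({
            "timestamp": ts * 1000000000,  # seconds -> nanoseconds
            "fields": fields,
            "tags": {
                "sample_type": "periodic_samples",
                "device": devices.get(f"dev-{dev_id}", "unknown"),
            },
        })
    return rows
```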

Whilst we're here, the other tables containing data are:

$ grep "INSERT INTO" /tmp/sqlite_dump  | cut -d\  -f3 | sort | uniq
android_metadata
BATTERY_LEVEL
DEVICE
DEVICE_ATTRIBUTES
MI_BAND_ACTIVITY_SAMPLE
USER
USER_ATTRIBUTES

mentioned in commit utilities/gadgetbridge_to_influxdb@eb87654daf52bf078bb496c404b8b79a86676989

Commit: utilities/gadgetbridge_to_influxdb@eb87654daf52bf078bb496c404b8b79a86676989 
Author: B Tasker                            
                            
Date: 2023-08-24T12:03:52.000+01:00 

Message

Collect periodic heart-rate and steps samples (identified in jira-projects/MISC#34)

+26 -1 (27 lines changed)

That's done the trick - I can now retrieve HR data with

SELECT "heart_rate" FROM "testing_db"."autogen"."gadgetbridge" WHERE time > :dashboardTime: AND time < :upperDashboardTime: AND "sample_type"='periodic_samples'

So, the next thing to do is to set up scheduled runs of the container. As before, I'm going to do this with a Kube cron job.

The secrets already exist from the previous setup.

I've updated the docker image to use the latest version of the script, but I'm going to have my cronjob pull from my local registry rather than docker hub - I've noticed a few crons failing recently, I guess Docker have been tightening their rate-limits further as part of their crusade against having users.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: gadgetbridge-to-influxdb
spec:
  schedule: "0,15,45 * * * *"
  failedJobsHistoryLimit: 5
  successfulJobsHistoryLimit: 5
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: gadgetbridge-to-influxdb
            image: registry.bentasker.co.uk/utilities/gadgetbridge_to_influxdb:0.2
            imagePullPolicy: IfNotPresent
            env:

            - name: INFLUXDB_BUCKET
              value: "testing_db"
            - name: INFLUXDB_MEASUREMENT
              value: "gadgetbridge"

            - name: WEBDAV_PATH
              value: "files/service_user/GadgetBridge/"
            - name: EXPORT_FILE
              value: "gadgetbridge"
            - name: QUERY_DURATION
              value: "7200"              

            - name: INFLUXDB_TOKEN
              valueFrom: 
                 secretKeyRef:
                    name: influxdbv1
                    key: influxdb_token
            - name: INFLUXDB_ORG
              valueFrom: 
                 secretKeyRef:
                    name: influxdbv1
                    key: influxdb_org
            - name: INFLUXDB_URL
              valueFrom: 
                 secretKeyRef:
                    name: influxdbv1
                    key: influxdb_url
            - name: WEBDAV_URL
              valueFrom: 
                 secretKeyRef:
                    name: nextcloud
                    key: webdav_url
            - name: WEBDAV_USER
              valueFrom: 
                 secretKeyRef:
                    name: nextcloud
                    key: webdav_user                    
            - name: WEBDAV_PASS
              valueFrom: 
                 secretKeyRef:
                    name: nextcloud
                    key: webdav_pass                     

          restartPolicy: OnFailure

Applying

kubectl apply -f gadgetbridge_cron.yml

Rather than waiting ~15 mins for the next run, have created a job from the cronjob

kubectl create job --from=cronjob/gadgetbridge-to-influxdb gadgetbridgemanual1
kubectl logs job.batch/gadgetbridgemanual1
/tmp/tmpm2ol29l6

No exceptions is a good sign

I've created a task in Tasker which sends intent nodomain.freeyourgadget.gadgetbridge.command.TRIGGER_EXPORT:

<TaskerData sr="" dvi="1" tv="6.1.32">
    <Task sr="task2">
        <cdate>1692873479721</cdate>
        <edate>1692873633998</edate>
        <id>2</id>
        <nme>Trigger Gadgetbridge Export</nme>
        <pri>100</pri>
        <Action sr="act0" ve="7">
            <code>877</code>
            <Str sr="arg0" ve="3">nodomain.freeyourgadget.gadgetbridge.command.TRIGGER_EXPORT</Str>
            <Int sr="arg1" val="0"/>
            <Str sr="arg2" ve="3"/>
            <Str sr="arg3" ve="3"/>
            <Str sr="arg4" ve="3"/>
            <Str sr="arg5" ve="3"/>
            <Str sr="arg6" ve="3"/>
            <Str sr="arg7" ve="3"/>
            <Str sr="arg8" ve="3"/>
            <Int sr="arg9" val="0"/>
        </Action>
    </Task>
</TaskerData>

In Gadgetbridge:

  • Settings
  • Scroll down to Intent API
  • Enable Allow database export

I can now use the Tasker task to manually trigger an export of the database.

I've triggered the Kube job again - we should get up-to-date HR data in the DB.

Although data's coming over, something's still not quite right.

Gadgetbridge doesn't seem to be getting step data at all - it's claiming 0 steps today, whilst the watch is reporting around 1600.

The band doesn't seem to be taking an HR measurement at the configured interval - I set it to 10 minutes, but the points that came through in the database are an hour apart (it looks like it takes multiple reads each time).

sqlite> select * FROM MI_BAND_ACTIVITY_SAMPLE ORDER BY TIMESTAMP DESC LIMIT 10;
TIMESTAMP|DEVICE_ID|USER_ID|RAW_INTENSITY|STEPS|RAW_KIND|HEART_RATE
1692877654|1|1|-1|0|1|-1
1692877653|1|1|-1|0|1|-1
1692877652|1|1|-1|0|1|-1
1692877651|1|1|-1|0|1|-1
1692877650|1|1|-1|0|1|76
1692872425|1|1|-1|0|1|100
1692872423|1|1|-1|0|1|100
1692872422|1|1|-1|0|1|100
1692872421|1|1|-1|0|1|100
1692872420|1|1|-1|0|1|100

I think... rather than chasing this down, it'd be wise to switch to the nightly build of Gadgetbridge and see how the "official" support behaves - it may well already have been addressed (plus, we'll get sleep, stress and PAI support)

Details on switching to Nightly are here

Have switched over - now have the option to collect Stress information, and PAI is showing up in the graphs.

Steps are still 0 though

sqlite> select * FROM MI_BAND_ACTIVITY_SAMPLE ORDER BY TIMESTAMP DESC LIMIT 10;
1692881653|1|1|-1|0|1|-1
1692881652|1|1|-1|0|1|-1
1692881651|1|1|-1|0|1|-1
1692881650|1|1|-1|0|1|-1
1692881649|1|1|-1|0|1|85
1692881648|1|1|-1|0|1|-1
1692881647|1|1|-1|0|1|82
1692881645|1|1|-1|0|1|82
1692881644|1|1|-1|0|1|82
1692881643|1|1|-1|0|1|82

Dumping to text doesn't show them appearing anywhere else. I might turn on dev logs in Gadgetbridge and then trigger a sync to see whether it's receiving and objecting to them

The logs show it fetching Stress and PAI data

14:01:39.283 [Binder:22963_1] DEBUG n.f.g.s.d.h.o.AbstractFetchOperation - Performing next operation fetching manual stress data
14:01:39.554 [Binder:22963_1] DEBUG n.f.g.s.d.h.o.AbstractFetchOperation - Performing next operation fetching pai data

But there's no mention of steps.

The upstream issue definitely mentions steps being collected

So far, I could synchronize the sleeping data, the steps data, vibrate the watch and check the beat rate

I'm running the same firmware version (v2.3.6.05) as the issue opener, so I'm guessing GB might only support retrieving steps that are recorded against an activity.

I'll pivot back to using Zepp for the time being, but should probably dig a bit deeper into this as it's so close to what I want.

I commented on the upstream ticket to note I'd seen some issues - have tried what they suggested, but no dice so far.

I'm probably not going to have time to play with this for a bit, so switching back to Zepp for now.


mentioned in commit utilities/gadgetbridge_to_influxdb@97410101f2b568635e8b6263a0112fed0a1bb66b

Commit: utilities/gadgetbridge_to_influxdb@97410101f2b568635e8b6263a0112fed0a1bb66b 
Author: B Tasker                            
                            
Date: 2023-08-25T08:52:12.000+01:00 

Message

Fix: Huami samples seem to use millisecond epochs

This adjusts timestamp calculation for the relevant tables.

We could probably autodetect this just before the write - if a timestamp is significantly greater than allowed, we could divide it by 1000

cc jira-projects/MISC#34

+13 -6 (19 lines changed)
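The autodetection floated in that commit message could be as simple as a threshold check (the exact cutoff here is an assumption - anything this large can't be a plausible seconds epoch):

```python
# Any seconds epoch below this is sane (it's roughly year 5138);
# millisecond epochs land well above it.
MS_THRESHOLD = 99999999999


def normalise_epoch(ts):
    """Divide millisecond epochs down to seconds; leave seconds alone."""
    if ts > MS_THRESHOLD:
        return ts // 1000
    return ts
```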

We're up and running!

For whatever reason, Gadgetbridge wasn't fully communicating with the watch. I unpaired it from the phone, deleted from GB and then re-added. Everything started working.

Details in upstream ticket

I've got step data coming into the DB

Screenshot_20230825_175207

Interestingly, it's point-in-time rather than cumulative.

Heart rate looks a bit questionable

Screenshot_20230825_180120

But, I've seen elsewhere that the watch uses 255 and 254 for missed/failed reads. We need to filter those out (either in queries, or in the script).
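Filtering in the script could be as simple as dropping the sentinel values before points are built - a sketch only (utilities/gadgetbridge_to_influxdb#1 tracks the real fix):

```python
FAILED_READS = {254, 255}  # values the watch uses for missed/failed measurements


def filter_heart_rate(rows):
    """Strip heart_rate fields carrying the failed-read sentinels.

    Rows left with no fields at all are dropped entirely.
    """
    for row in rows:
        if row["fields"].get("heart_rate") in FAILED_READS:
            del row["fields"]["heart_rate"]
    return [r for r in rows if r["fields"]]
```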

Raised utilities/gadgetbridge_to_influxdb#1 to follow up on this

I'm going to close this off now and move to tracking under the code's project.

I've marked it as public, so it should soon appear at https://projects.bentasker.co.uk/gils_projects/project/utilities/gadgetbridge_to_influxdb.html

mentioned in commit sysconfigs/bumblebee-kubernetes-charts@c8eeb8b5925b2a7d4f824fd587db2d4b5b402110

Commit: sysconfigs/bumblebee-kubernetes-charts@c8eeb8b5925b2a7d4f824fd587db2d4b5b402110 
Author: ben                            
                            
Date: 2023-08-25T09:00:31.000+01:00 

Message

Set Gadgetbridge cron live jira-projects/MISC#34

This moves the container from writing into the testing db to writing into the live DB.

+4 -2 (6 lines changed)