Advance Feature Settings

The Advance Feature Settings section contains a group of features for fine-tuning LYNX for specific use-cases.

Data Points per Packet

Data Points per Packet (dpPerPacket) is a configurable parameter that decides how many data points are pushed to the cloud in a given packet. To understand this, let's first go through a typical run cycle of LYNX.

  1. LYNX reads all field instruments and sensors as defined in Feature Settings and stores them in local variables. Subsequently, data scaling may be applied to the inputs. Depending on the settings, LYNX may read 1, 5 or even 100+ variables in a given cycle.

  2. These variables are then packaged into a payload (packet) and sent to the cloud. Depending on the protocol being used and the server capability, it may or may not be possible to send all data points in a single API call to the cloud.

dpPerPacket decides how many data points (variables) should be packed together in a single payload packet and sent to the cloud.

As an example, let's assume LYNX is reading 10 MODBUS registers and 3 analog inputs (total 13 variables).

Data Points Per Packet | Remark
1 | Each variable will be sent as an independent packet. So, there will be 13 API calls, based on the cloud protocol (MQTT, HTTP etc.).
5 | There will be 3 cloud pushes, with packets of 5, 5 and 3 variables each.
16 | All 13 variables will be packed together in a single packet. It is OK to set dpPerPacket to a number higher than the number of variables being read.
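
For clarity, here is a minimal Python sketch (illustrative only, not LYNX firmware code) showing how the 13 readings from the example above could be grouped according to dpPerPacket; all names in it are hypothetical.

```python
# Illustrative sketch only: grouping data points by dpPerPacket (not LYNX firmware code).
def split_into_packets(readings, dp_per_packet):
    """Group data points into payload packets of at most dp_per_packet each."""
    return [readings[i:i + dp_per_packet] for i in range(0, len(readings), dp_per_packet)]

# 10 MODBUS registers + 3 analog inputs = 13 variables (dummy values)
readings = [("MB_REG_%02d" % n, n * 1.5) for n in range(1, 11)]
readings += [("AIN_%d" % n, n * 0.25) for n in range(1, 4)]

for dp_per_packet in (1, 5, 16):
    packets = split_into_packets(readings, dp_per_packet)
    print(f"dpPerPacket={dp_per_packet}: {len(packets)} cloud push(es), "
          f"packet sizes = {[len(p) for p in packets]}")
# dpPerPacket=1  -> 13 pushes of 1 data point each
# dpPerPacket=5  -> 3 pushes of 5, 5 and 3 data points
# dpPerPacket=16 -> 1 push containing all 13 data points
```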

Considerations for setting dpPerPacket

Depending on the cloud server, the API may accept just one value at a time, in which case a dpPerPacket of 1 may be required. This setting typically consumes more bandwidth.

If there are many variables (say, 100), it may not be possible to send all of them together, due to the limits of protocols like MQTT or due to server restrictions. So, packets of a reasonable size (say, 16 or 20 data points per packet) may need to be created as per the use-case.

In case of CSV or text file push through HTTP or FTP, it may be mandatory that ALL data points are processed together. In this case, dpPerPacket has to be set to a value higher than the number of data points being read. For example, to read 13 data points across industrial sensors, dpPerPacket can be set to 16.

LYNX displays the data points per packet setting on its screen as below:

Auto Reboot Cycle

In some use-cases, it is required (and advisable) that LYNX is rebooted periodically to avoid the possibility of hanging. The Auto Reboot Cycle parameter defines the number of main loop cycles LYNX will run before it reboots itself. Please note that this is not clock time. So, this parameter has to be set based on the loopDelay being used and the preferred frequency of auto-reboot.

For example, if loopDelay is set to 300 seconds (5 minutes) and AutoRebootCycle is set to 36, LYNX will reboot itself after approximately 3 hours (5 minutes x 36 = 180 minutes).

By default, Auto Reboot Cycle is set to 0, which means LYNX will not auto-reboot.
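
As an illustration of the counting described above (not LYNX firmware code), the Python sketch below reboots after AutoRebootCycle main-loop cycles, with 0 meaning auto-reboot is disabled; the names are hypothetical.

```python
# Illustrative sketch of the Auto Reboot Cycle logic (not LYNX firmware code).
import time

loop_delay_s = 300        # loopDelay = 300 seconds (5 minutes)
auto_reboot_cycle = 36    # reboot after 36 cycles ~ 5 min x 36 = 180 minutes

def reboot():
    print("Rebooting device...")   # placeholder for an actual device reboot

cycle_count = 0
while True:
    # ... read sensors, build packets and push data to the cloud here ...
    cycle_count += 1
    if auto_reboot_cycle > 0 and cycle_count >= auto_reboot_cycle:
        reboot()                   # a value of 0 disables auto-reboot
        break
    time.sleep(loop_delay_s)
```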

Enable Debug

This flag is used to enable or disable the debug output of LYNX for troubleshooting and development purposes. For most applications, it can be left enabled. If you are trying to run LYNX with a very small loopDelay (to capture data at a high rate), this flag should be disabled.

RTC Settings

Depending on the product variant, LYNX provides an in-built clock (RTC) to maintain local time. RTC may be required in the following typical use-cases:

  • A timestamp is required by the server/protocol along with the data points.

  • Data backfill in case of network loss. Payloads are stored in local storage along with timestamp tags.

  • Running LYNX in a clock time-synced manner (TimeSyncRun)

LYNX IoT gateways without an RTC do not know or maintain the current time. They are designed to send data at a given time interval. All data points are sent without a timestamp. Most modern IoT platforms take the present UTC time at which the data is received on the server.

Enabling RTC

RTC is enabled by the Enable RTC check box.

The RTC check box should be enabled only for supported devices. Otherwise, LYNX may crash during RTC setup.

LYNX message after RTC setup

TimeSyncRun

TimeSyncRun is an advanced version of loopDelay, used to run LYNX in a time-synced manner.

For example, suppose we want to send data to the cloud at a 5-minute interval, synced with the clock (10:00, 10:05, 10:10). To achieve this, TimeSyncRun is set to 5.

This is different from using a loopDelay of 300 seconds, where data is first sent on bootup and every 5 minutes subsequently. In that case, you may receive data at clock times of 10:02, 10:07, etc., depending on when LYNX was booted. With TimeSyncRun, LYNX will send the first data only when the clock ticks to the next 5-minute boundary.

The unit of TimeSyncRun is minutes. So, LYNX loops can be run in multiples of minutes only, without finer granularity.

The default value of TimeSyncRun is zero, in which case loopDelay is used for the looping cycle. If TimeSyncRun is 1 or higher (and RTC is working), it will be used instead.

If, for any reason, RTC is not working, the TimeSyncRun parameter will be ignored and loopDelay will be used. So, it is advisable to set loopDelay to TimeSyncRun x 60 (seconds).
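
A minimal Python sketch of this scheduling (illustrative only, not LYNX firmware code) is given below; it waits for the next clock boundary that is a multiple of TimeSyncRun minutes and falls back to loopDelay when RTC is unavailable. It assumes TimeSyncRun divides an hour evenly (e.g. 1, 5, 10, 15 minutes).

```python
# Illustrative sketch of TimeSyncRun scheduling (not LYNX firmware code).
from datetime import datetime

def seconds_to_next_boundary(now, time_sync_run_min):
    """Seconds until the next clock time that is a multiple of time_sync_run_min minutes."""
    seconds_into_hour = now.minute * 60 + now.second
    period_s = time_sync_run_min * 60
    return period_s - (seconds_into_hour % period_s)

time_sync_run = 5       # minutes; 0 disables TimeSyncRun
loop_delay_s = 300      # fallback looping cycle, in seconds
rtc_available = True    # assume RTC is enabled and working for this example

if time_sync_run >= 1 and rtc_available:
    wait_s = seconds_to_next_boundary(datetime.now(), time_sync_run)
    print(f"Waiting {wait_s} s for the next {time_sync_run}-minute clock boundary")
else:
    print(f"TimeSyncRun not usable; falling back to loopDelay of {loop_delay_s} s")
```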

LYNX displays local time and TimeSync information on the OLED display.

Reading and Updating RTC Settings

The RTC settings for TimeSync and timezone are configured through the Reading and Updating RTC Settings option.

Clock Time Synchronization Mechanism

LYNX uses the following mechanisms to sync its clock at bootup, depending on the network in use.

  1. 4G/LTE network: LYNX uses the modem clock time as the current local time.

  2. Wi-Fi/Ethernet: LYNX takes time from an NTP server at bootup. This can be enabled/disabled by checking "NTP Sync at Bootup". The default server, pool.ntp.org, is used to get UTC time. It can be changed through the NTP pool server entry, as sketched below.
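
As an illustration of the Wi-Fi/Ethernet case (not LYNX firmware code), the Python sketch below fetches UTC time from a configurable NTP pool server; it uses the third-party ntplib package and hypothetical variable names.

```python
# Illustrative sketch of NTP time sync at bootup (not LYNX firmware code).
# Requires the third-party 'ntplib' package: pip install ntplib
from datetime import datetime, timezone
import ntplib

ntp_pool_server = "pool.ntp.org"   # default server; configurable via the NTP pool server entry

client = ntplib.NTPClient()
response = client.request(ntp_pool_server, version=3)
utc_time = datetime.fromtimestamp(response.tx_time, tz=timezone.utc)
print("UTC time from NTP:", utc_time.isoformat())
```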

Time Zone Minutes Settings

TimeZone Minutes is set as the offset of the local timezone from UTC. If local time is ahead of UTC, a positive number has to be given. For example, India is in the +5:30 time zone, so a TimeZoneMinutes value of 330 (minutes) is applicable.

This parameter is used in the following time adjustments:

  • When time is synchronized with NTP servers, Time Zone Minutes is used to derive local time from UTC.

  • Based on the selected timeStamp format, UTC or local time may be used in payloads. When local time is taken from the 4G modem, UTC time is derived using Time Zone Minutes.

The default Time Zone Minutes is taken as 330 (IST) if no value is given.
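
The arithmetic is simple, but as a minimal sketch (illustrative Python, not LYNX firmware code), the conversion between UTC and local time using Time Zone Minutes looks as follows:

```python
# Illustrative sketch of the Time Zone Minutes offset (not LYNX firmware code).
from datetime import datetime, timedelta, timezone

time_zone_minutes = 330   # IST = UTC +5:30 = 330 minutes (default when no value is given)

utc_now = datetime.now(timezone.utc)
local_now = utc_now + timedelta(minutes=time_zone_minutes)     # UTC -> local (NTP case)
utc_back = local_now - timedelta(minutes=time_zone_minutes)    # local -> UTC (4G modem case)

print("UTC time:  ", utc_now.strftime("%Y-%m-%d %H:%M:%S"))
print("Local time:", local_now.strftime("%Y-%m-%d %H:%M:%S"))
```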

Daylight Savings

At present, LYNX does not support daylight saving adjustment. So, a fixed timezone is applicable.

Depending on network usage and server settings, the timeStamp format should be selected. Please contact YuDash technical support to cover this use-case.

We are looking at use-cases and solutions to handle daylight savings in a graceful manner. We will be pleased to discuss requirements and suggestions in this regard.

LYNX does NOT support daylight saving settings at present.

Network Failure Handler Settings

LYNX data logger variants support an advanced Network Failure Handler to avoid data loss in case of network failure. When there is a network failure, LYNX can store the payloads in local storage, and they are later back-filled to the server. This is termed the NFH (Network Failure Handler) feature in LYNX.

Following are the pre-requisites for NFH:

  1. SD card should be connected and mounted in LYNX

  2. RTC should be enabled and working.

  3. TimeStamp setting in payload should be enabled.

  4. The server should support and accept historical data points.

Assuming that the above requirements are met and NFH is enabled, the behaviour is as follows:

In case the cloud server is not accessible (due to a potential network failure), LYNX tries to reconnect to the network (4G, Ethernet, Wi-Fi). If not successful in the given attempts, it will enter NFH mode.

In NFH mode, LYNX will store each payload in local storage as a separate file (in a pre-defined directory).

LYNX will run for the number of cycles defined by Maximum NFH Runs and then reboot.

After reboot, it will again try to connect to the network. If successful, it will send the historical payloads along with the current data points. The stored payload files are deleted after a successful data push to the cloud.

If the network connection fails at boot-up, it will again store local files for the given NFH runs until the next reboot.

As a network reconnect attempt may take 1-2 minutes (for 4G/LTE), a few data points may be lost if data is pushed every minute.

The network check in NFH mode is done only at reboot.
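
To summarise the flow described above, here is a minimal Python sketch (illustrative only, not LYNX firmware code); the directory name, file layout and helper functions are hypothetical.

```python
# Illustrative sketch of NFH behaviour (not LYNX firmware code).
# Directory name, file layout and helpers are hypothetical.
import json, os, time

NFH_DIR = "nfhBkpDir"   # on LYNX this is a pre-defined directory (e.g. /nfhBkpDir) on the SD card

def push_to_cloud(payload):
    """Placeholder for the real cloud push; return True on success."""
    return False   # simulate a network failure for this example

def store_payload_locally(payload):
    """NFH mode: store each payload as a separate timestamped file."""
    os.makedirs(NFH_DIR, exist_ok=True)
    fname = os.path.join(NFH_DIR, "payload_%d.json" % int(time.time()))
    with open(fname, "w") as f:
        json.dump(payload, f)

def backfill_stored_payloads():
    """After reboot and a successful reconnect: push stored files, delete on success."""
    if not os.path.isdir(NFH_DIR):
        return
    for fname in sorted(os.listdir(NFH_DIR)):
        path = os.path.join(NFH_DIR, fname)
        with open(path) as f:
            payload = json.load(f)
        if push_to_cloud(payload):
            os.remove(path)   # files are deleted only after a successful push

# At bootup (after a successful network reconnect), historical payloads are sent first:
backfill_stored_payloads()

payload = {"ts": int(time.time()), "AIN_1": 3.7}   # current data point with timestamp
if not push_to_cloud(payload):
    store_payload_locally(payload)   # network failure: keep the payload for later backfill
```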

-- pending - OLED images/videos when NFH is running --

SD Card

The LYNX data logger supports an external SD card as a peripheral for local data storage. It is used as temporary storage for the Network Failure Handler. It can also be used for local logging. Following are the steps to use the SD card:

  1. SD card has to be inserted in the LYNX SD card slot prior to booting up.

  2. SD card check box has to be enabled under NFH settings.

When SD card is enabled, LYNX mounts the SD card during setup.

LYNX message after SD card was mounted successfully during setup

-- OLED display for SD card pending pending --

LYNX supports SD cards of up to 16GB. It is preferable to use 8GB SD cards. A few variants of 16GB (or higher capacity) SD cards do not mount in LYNX. In this case, the following error is displayed during setup:

-- OLED display for SD card error pending--

NFH SD card Directory

This is the directory within the SD card where temporary files are stored for NFH.

The default directory for NFH is /nfhBkpDir. This directory is created by LYNX if it is not present on the SD card.

Typically, it is not required to change this directory. If at all needed, it should be exactly 9 characters long and should be prefixed by /, for example /customDir. Input of any other character length will be ignored by LYNX.
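
A tiny sketch of this naming rule (illustrative Python, not LYNX firmware code) is shown below; it assumes the rule means a leading / followed by exactly 9 characters, matching the default /nfhBkpDir.

```python
# Illustrative check of the NFH directory naming rule (not LYNX firmware code).
# Assumption: a valid name is a leading "/" followed by exactly 9 characters, as in "/nfhBkpDir".
def is_valid_nfh_dir(name):
    return name.startswith("/") and len(name) == 10

print(is_valid_nfh_dir("/nfhBkpDir"))   # True  (default directory)
print(is_valid_nfh_dir("/customDir"))   # True
print(is_valid_nfh_dir("/logs"))        # False (a name of any other length is ignored)
```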

For successful NFH operation, it is mandatory to have RTC, the SD card and a valid timestamp working. Otherwise, NFH will not work as desired.

Network Fallback

By default, only one network (4G/LTE, Ethernet or Wi-Fi) is selected in the network settings. With the Network Fallback feature, LYNX offers fallback to a second network at bootup time. At present, fallback from 4G/LTE to Ethernet is available.

LYNX is not intended to work in multi-network failover modes. For these cases, it is suggested to use an advanced router, with cloud access provided through Ethernet.

YuReCon

YuDash supports YuReCon (YuDash Remote Configuration) for in-field deployed devices. Please refer to the YuReCon documentation for further details and architecture.

YuReCon is enabled within Advance Feature Settings as below:
