Hello all! We are reaching the end of our second week and strong progress has been made! First, Max and Arshan have managed to parse the GPS coordinates from the decoder software’s output text file and send them to IBM’s Bluemix IoT node via a Python MQTT script. Once on Bluemix, the messages are routed to Twitter, and Arshan and Max are now experimenting with routing messages conditionally, based on incoming tweets from people following the project, using custom JavaScript functions within Node-RED.
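
For anyone curious, the sketch below shows roughly what that bridge looks like, assuming the standard Bluemix/Watson IoT MQTT conventions and UKHAS-style telemetry lines from the decoder; the organisation, device and token values are placeholders, not our real credentials.

```python
import json
import paho.mqtt.client as mqtt

ORG, DEV_TYPE, DEV_ID, TOKEN = "abc123", "pi", "balloon", "secret-token"

def parse_telemetry(line):
    """Pull latitude, longitude and altitude from one decoder output line
    of the form $$CALLSIGN,count,time,lat,lon,alt,...*CRC."""
    fields = line.strip().lstrip("$").split("*")[0].split(",")
    return {"lat": float(fields[3]), "lon": float(fields[4]),
            "alt": float(fields[5])}

client = mqtt.Client("d:%s:%s:%s" % (ORG, DEV_TYPE, DEV_ID))
client.username_pw_set("use-token-auth", TOKEN)
client.connect("%s.messaging.internetofthings.ibmcloud.com" % ORG, 1883)
client.loop_start()

with open("telemetry.txt") as f:          # the decoder software's output file
    for line in f:
        if line.startswith("$$"):
            payload = json.dumps({"d": parse_telemetry(line)})
            client.publish("iot-2/evt/gps/fmt/json", payload)

client.loop_stop()
client.disconnect()
```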

Yihan has been working on mapping the GPS data received from the Pi in the Sky (PITS) and manipulating it with other software so that the location information can be presented on screen in a more accessible and intuitive manner. He has successfully converted the raw data we receive (a plain text file) into more commonly used formats, namely GPX and Keyhole Markup Language (KML). With those GPS files we can map and display the geographic information in map-viewer software such as Google Earth. He is now working on connecting the individual location points into a travel path in order to track our flight and make a video in Google Earth, giving a comprehensive view of our journey.
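
As a rough illustration of the conversion (the input format and file names here are assumptions), a handful of lines of Python are enough to turn a text file of fixes into a KML file that Google Earth will open, with each fix as a placemark and all fixes joined into a single 3D path:

```python
KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2"><Document>
{placemarks}
<Placemark><name>Flight path</name><LineString>
<altitudeMode>absolute</altitudeMode>
<coordinates>{path}</coordinates>
</LineString></Placemark>
</Document></kml>"""

points = []
with open("gps_log.txt") as f:            # raw decoder output, one fix per line
    for line in f:
        lat, lon, alt = (float(x) for x in line.split(","))
        points.append((lon, lat, alt))    # KML orders coordinates lon,lat,alt

placemarks = "\n".join(
    "<Placemark><Point><coordinates>%f,%f,%f</coordinates></Point></Placemark>"
    % p for p in points)
path = " ".join("%f,%f,%f" % p for p in points)

with open("flight.kml", "w") as out:
    out.write(KML_TEMPLATE.format(placemarks=placemarks, path=path))
```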
 
The screen snippet ‘Capture2’ shows the first GPS reading we received when we calibrated our sensors in the Electrical Engineering Department at Imperial College. Test point 2 is the original data, while point 3 shows the same point after we deliberately raised its altitude.
 

Capture 2 - First GPS Reading during calibration of sensors

The screen snippet ‘Capture’ shows the path we linked between two peaks. It demonstrates that we can link any two points in a 3D display and present them on a map.

Capture - 3D Path between two points in question

Wei has been working hard, with help from Diyar and me, on integrating the sensors with the Jessie Lite interface. The light sensor is still playing up, but we have replaced the magnetometer/accelerometer and are now getting much more accurate readings! Wei has also successfully operated and extracted readings from all of the sensors at the same time, with everything connected on the same breadboard, which is strong progress. The next step is to combine the ‘Transmit’ C code with the Python library so that the sensor readings can be sent across to another terminal and uploaded to Bluemix.
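
The simultaneous readings boil down to polling every device on the shared I2C bus in one loop, along these lines (the addresses and registers below are placeholders rather than our actual parts):

```python
import time
import smbus

bus = smbus.SMBus(1)                      # I2C bus 1 on the Pi

SENSORS = {                               # hypothetical address/register pairs
    "light":  (0x29, 0x0C),
    "accel":  (0x1D, 0x28),
    "magnet": (0x1E, 0x03),
}

while True:
    readings = {name: bus.read_word_data(addr, reg)
                for name, (addr, reg) in SENSORS.items()}
    print(readings)                       # later: hand off to the Transmit C code
    time.sleep(1)
```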

On an equally positive note, the LoRa boards have arrived! The LoRas will be used mainly for two-way (half-duplex) communication between the ground and the balloon: we can send tweets up to the balloon, and the balloon can send telemetry and images down to the ground. Configuration and installation have been successful on both the transmitting and receiving Pis. Now we need to test whether useful data can be sent between the Pis, since LoRa transmissions cannot be decoded by our SDR. This should be done by the end of the week if everything goes smoothly.
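
One simple way to run that test, independent of whichever interface the LoRa gateway software exposes, is to frame each payload with a CRC on the transmitting Pi and verify it on the receiving Pi, for example:

```python
import zlib

def frame(payload: bytes) -> bytes:
    """Append a CRC32 so the receiver can detect corruption in transit."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def verify(frame_bytes: bytes):
    """Return the payload if the CRC matches, else None."""
    payload, crc = frame_bytes[:-4], frame_bytes[-4:]
    return payload if zlib.crc32(payload).to_bytes(4, "big") == crc else None

# Quick self-test of the framing itself:
assert verify(frame(b"$$PITS test packet")) == b"$$PITS test packet"
```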

I have been working with Diyar on the website, building out its functionality and design. We decided to merge the ‘Sentinel’ page with the ‘Mission’ page in order to streamline the experience for users. We also added several images of the sensors and the GPS dongle, with relevant explanations, and Diyar added images to the ‘Team’ page. We decided upon the URL www.projectedge.net, and the website has been officially launched!

On a final note, the Chemical Engineering Department has not yet got back to us, and in any case their store doesn’t officially stock any type of gas. The best course of action will therefore be to purchase a large canister of helium from the BOC Group at Imperial, at a cost of around £120.

- Peter

 

Usmaan:

I have noted that a decoding procedure is no longer necessary to retrieve the transmitted images. This has allowed me to focus on the cognitive and data-processing aspects of the project, since retrieval is no longer a hurdle. I have been considering comparing the received data against a geographical database of information about Earth. On comparison with this information, the probe could tweet quantities relatable to Earth-dwellers describing what it is experiencing, for example: “Feels like riding Stealth, Thorpe Park up here!” if the G-forces experienced are within a certain error of those on the ride. The same principle may be applied to luminosity, pressure and temperature readings, for example: “Feeling the pressure drop of climbing Kilimanjaro x many times!” The challenge now is to implement Watson’s APIs optimally in order to achieve the functionality described above in style!
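
A first pass at the matching logic might look like the following; the reference values are illustrative, and the Watson/Twitter hand-off is left out:

```python
REFERENCES = {
    "g_force": [
        (4.5, "Feels like riding Stealth, Thorpe Park up here!"),
        (1.0, "Calm as a Sunday stroll right now."),
    ],
    "pressure_hpa": [
        (505.0, "Feeling the pressure drop of the summit of Kilimanjaro!"),
    ],
}

def relatable_tweet(quantity, value, tolerance=0.1):
    """Return a tweet if the reading is within `tolerance` (fractional error)
    of a known Earth reference, otherwise None."""
    for ref, text in REFERENCES.get(quantity, []):
        if abs(value - ref) / ref <= tolerance:
            return text
    return None

print(relatable_tweet("g_force", 4.3))    # -> the Stealth tweet
```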