Monday, 11 February 2019

Measuring Upper Atmosphere Radiation

Aims

Last Saturday AREG flew another high altitude balloon for the SHSSP19 program.  As well as two imaging payloads using the Wenet downlink, this year we decided to measure ionising radiation during the balloon flight.  Cosmic radiation levels increase as we ascend from sea level, losing the protection of our atmosphere.  Cosmic rays (mainly high energy particles from our sun and beyond) and gamma rays (high energy photons) present significant hazards to humans (and electronics) in space.  Our experiment was designed to repeat measurements of the charged particle distribution in the atmosphere first carried out by Georg Pfotzer in the 1930s.

We used two radiation detectors on the Horus-52 flight.  The RD2014 solid state (SS) detector uses a set of PIN diodes (plus amplifier and limiter) that are shielded from visible light but respond to high energy photons or particles.  This product has now been superseded, but the manufacturer offers very similar modules.  A Geiger-Muller (GM) tube was also used to measure ionising radiation.  This requires a high voltage power supply, but fortunately draws minimal power.

Hardware

The GM schematic is a synthesis of various circuits from the web and was fairly easy to build.  An oscillator drives the transistor used in the charge pump.  The basic operation is easy to describe: current starts to flow through the 10 mH inductor when the transistor is turned on, but inductors don't like sudden changes in current.  The voltage across the inductor is proportional to the derivative of the current (V = L dI/dt), so when the transistor turns off we get a large voltage spike.  This turns on the first diode and current flows into the high voltage capacitor.  A couple more diodes and capacitors make a voltage doubler.  Since the regulation of this power supply is poor, a series of zener diodes is used to stabilise the voltage (before doubling).  The voltage applied to the detector is a little over 450 V.
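As a rough sanity check on that formula, a couple of lines of Python illustrate the size of the flyback spike (the current and switching time below are illustrative guesses, not measured values from this circuit):

# Illustrative only: rough estimate of the inductor flyback spike from V = L dI/dt.
# The current and switching time are guesses, not measured values from this circuit.
L_henry = 10e-3     # the 10 mH inductor in the charge pump
I_peak  = 0.05      # assumed peak inductor current when the transistor switches off (A)
t_fall  = 1e-6      # assumed current fall time at switch-off (s)

V_spike = L_henry * I_peak / t_fall
print("Approximate flyback spike: %.0f V" % V_spike)   # ~500 V with these illustrative numbers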

The GM tube itself is filled with an inert gas and normally acts like a good insulator.  However a high energy particle or photon ionises some gas molecules and initiates a discharge from anode to cathode.  In our circuit this small current is amplified by a transistor and allows current to flow briefly from the (Raspberry Pi) 3 V supply to flash an LED and drive a falling edge to an IO pin of the RPi, causing an interrupt.  The solid state detector output was connected to another GPIO pin of the RPi.
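For reference, a minimal sketch of the counting side might look like the following, assuming the RPi.GPIO library and made-up pin numbers (the actual payload script differs in detail):

# Minimal sketch: count falling edges from the two detectors on a Raspberry Pi.
# The GPIO pin numbers here are assumptions for illustration only.
import time
import RPi.GPIO as GPIO

GM_PIN = 17     # GPIO connected to the GM tube pulse output (assumed)
SS_PIN = 27     # GPIO connected to the RD2014 solid state detector (assumed)

counts = {"gm": 0, "ss": 0}

def gm_event(channel):
    counts["gm"] += 1

def ss_event(channel):
    counts["ss"] += 1

GPIO.setmode(GPIO.BCM)
GPIO.setup(GM_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(SS_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.add_event_detect(GM_PIN, GPIO.FALLING, callback=gm_event)
GPIO.add_event_detect(SS_PIN, GPIO.FALLING, callback=ss_event)

while True:
    time.sleep(10)
    print(counts)   # cumulative counts; the real payload also logs these to the uSD card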


The left hand figure shows the overall arrangement.  The "radiation" payload in the upper half of the figure communicates via WiFi with the Wenet payload in the lower half.  High energy particles detected by either of the sensors are counted by the radiation RPi Zero.  Cumulative counts are stored locally on the uSD card, and also downlinked in the Wenet telemetry stream (with the camera images).






Software

Existing Wenet code allowed a fairly simple Python script to be used on the radiation payload to count interrupts and send the current count values every few seconds via a UDP packet.  The Wenet RPi was configured to act as a WiFi Access Point (AP), collecting these secondary payload packets for inclusion in the downlink telemetry within the amateur radio 70 cm band.  On the ground, the secondary packets can be readily extracted and stored for real-time processing and display.
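The sending side of that script can be sketched roughly as follows; the address, port and packet format here are illustrative assumptions rather than the real Wenet conventions:

# Rough sketch: send the cumulative counts to the Wenet RPi every few seconds via UDP.
# The address, port and JSON packet format are illustrative assumptions.
import json
import socket
import time

WENET_HOST = "192.168.4.1"   # assumed address of the Wenet RPi acting as the WiFi AP
WENET_PORT = 55672           # assumed UDP port for secondary payload packets

counts = {"gm": 0, "ss": 0}  # updated by the GPIO interrupt callbacks sketched earlier

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
start = time.time()

while True:
    packet = json.dumps({
        "time": int(time.time() - start),
        "gm_count": counts["gm"],      # cumulative GM tube count
        "ss_count": counts["ss"],      # cumulative solid state detector count
    }).encode()
    sock.sendto(packet, (WENET_HOST, WENET_PORT))
    time.sleep(5)                      # "every few seconds"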

The two payloads are separated by some metres on the balloon train, so WiFi seemed like a good method for these boxes to communicate.  We only found late in testing that the RPi AP software was not very reliable.  In addition the Wenet RPi is already heavily loaded and the AP processing can cause the processor to overheat.  Some last-minute code (mainly from Mark, VK5QI) for AP suspension and restarting overcame these issues.




Results 


The final payload is shown on the right.  From the left, the GM tube, HT supply and RPi Zero are mounted on a baseboard, with the RD2014 near the bottom edge.  Three lithium AA cells provide power to a small up-converter, as the RD2014 needs to run at 5 V and this also suits the RPi.  All of this fits in a foam box to provide insulation.



The first figure below shows the results from data collected on the uSD card and recovered after Horus 52 had landed.  The blue and green plots are cumulative radiation counts from the GM tube and SS detectors respectively.  The GM tube is more sensitive (much larger number of counts), probably due to its larger area.   
























Real-Time Radiation Counts versus Altitude


To understand the variations in radiation during the flight, the plots on the right show the counts in each 2 minute period.  It can be seen that the radiation levels vary significantly during the flight.









To get the best picture of radiation versus altitude we can use the data collected in real time during the flight.  While the stored data in the payload didn't contain any altitude information, the ground station software stored altitude readings from the Wenet GPS as it collected radiation data packets.  This allows the radiation counts per 120 seconds to be plotted versus altitude.  Now the characteristic "Pfotzer Maximum" can be seen more clearly, around 15 to 20 km.
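A hedged sketch of that post-processing step is shown below, assuming the ground station wrote a simple CSV of time, altitude and cumulative counts (the actual log format differs):

# Sketch: plot counts per 120 s versus altitude from the ground-station log.
# The file name and column names are assumptions for illustration.
import pandas as pd
import matplotlib.pyplot as plt

log = pd.read_csv("horus52_radiation_log.csv")   # time_s, altitude_m, gm_count, ss_count

# Turn the cumulative counts into counts per 120 second interval
log = log.set_index(pd.to_timedelta(log["time_s"], unit="s"))
binned = log.resample("120s").last()
gm_per_bin = binned["gm_count"].diff()

plt.plot(gm_per_bin, binned["altitude_m"] / 1000.0)
plt.xlabel("GM counts per 120 s")
plt.ylabel("Altitude (km)")
plt.title("Counts versus altitude - Pfotzer maximum around 15-20 km")
plt.show()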

Friday, 3 August 2018

How Fourier Transforms Work


In a recent talk I tried to explain how Fourier Transforms can be used to estimate the frequency content of signals (i.e. the spectral content).  Normally we would use maths to illustrate these concepts - but that doesn't suit everyone, so here's an attempt with only animated images!


First let's consider a signal which contains only one frequency component.  We consider sampled signals (i.e. discrete-time signals) - the upper plot on the left shows a sine wave sampled at regular time intervals.  In fact any periodic signal can be decomposed into sinusoidal components, but for simplicity we will only consider sine waves.









Since each sample has a magnitude and phase, in the lower left plot we show a polar version of the sample sequence.  This rotating "phasor" signal is actually a more general representation than plotting the samples versus time.  The original signal can be recovered by taking the horizontal component of the polar diagram.  Likewise the vertical part represents another sine wave, with a 90 degree phase difference.  (In radio terminology, these components are called the In-phase and Quadrature signals.)
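In numpy terms the rotating phasor is just a complex exponential, and the I and Q signals fall out as its real and imaginary parts - a small sketch, with arbitrary illustrative values:

# Sketch: a sampled "phasor" and its I and Q components
import numpy as np

N = 32                       # number of samples
k = 2                        # cycles per N samples (illustrative)
n = np.arange(N)
phasor = np.exp(2j * np.pi * k * n / N)   # rotating phasor, magnitude 1

I = phasor.real              # horizontal component: the original sampled wave
Q = phasor.imag              # vertical component: the same wave shifted by 90 degrees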





Of course if the signal frequency is lower, the plot will show more samples per cycle (see magenta example). 

Noise has been added to the sine wave shown on the LHS.  (In the polar plot, independent noise samples have been added in both the I and Q dimensions.)






From now on we will drop the time-domain plots.  Our aim is to estimate the amount of each sine-wave component in a sampled signal.  Assume our input signal is shown as the blue phasor samples; it will be compared to four "references" plotted below in black.  Ref2 is twice the frequency of Ref1, the next is three times, etc.  A Discrete Fourier Transform (DFT) would normally contain many more frequency references, but four is enough for this illustration.

   
 You might notice that the blue input is the same frequency as Ref2.  How can we use the reference phasors to estimate the input spectrum?  

Assume that for each new sample, we multiply the signal magnitude by the reference magnitude (which is 1) and take the difference between their phases.  This gives a product phasor for each new sample, which is added to the running sum of previous products (as in vector addition).  The result is shown below, with the references shown in different colours for clarity.
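Numerically, "multiply the magnitudes and subtract the phases" is just multiplying each input sample by the complex conjugate of the reference phasor, and the "vector addition" is a running complex sum.  A small numpy sketch with four references, as in the figures:

# Sketch: accumulate the product phasors for the 4 reference frequencies
import numpy as np

N = 32
n = np.arange(N)
x = np.exp(2j * np.pi * 2 * n / N)          # blue input: same frequency as Ref2

sums = []
for k in (1, 2, 3, 4):                      # Ref1 .. Ref4
    ref = np.exp(2j * np.pi * k * n / N)    # reference phasor, magnitude 1
    products = x * np.conj(ref)             # multiply magnitudes, subtract phases
    sums.append(np.cumsum(products))        # the running "vector" (complex) sum

# Net length of each phasor sum at the end of the DFT:
print([round(abs(s[-1]), 3) for s in sums])   # large for Ref2 (= N), ~0 for the others

# The same bin magnitudes come straight out of the FFT:
print(np.abs(np.fft.fft(x))[1:5])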




On the RHS sub-plot, observe that the magenta phasor (or vector) sum grows steadily in length during the DFT.  (The phase difference between blue and magenta is zero.)  However the other product sums "curve back" on themselves and their net length is zero at the end of the DFT.  The RHS sub-plots are auto-scaled so we can see the initial behaviour more clearly.  The bottom right sub-plot shows the net length of each phasor sum during the DFT.  This shows, in a more conventional plotting style, that the spectral amplitude is zero for all components except Ref2.



If you have followed the figures above -- well done!  But is this example too contrived - what happens with noise, or if the input signal is a slightly different frequency?  On the right, we see it still works!  Now the magenta vector sum is slightly curved - but its length is still much greater than that of the other components.  (We say the input frequency is no longer 'bin-centred'.)








Needless to say, I encourage you to look at the mathematical description of DFTs.  While the DFT takes a lot of numerical processing, thanks to the great work by Cooley and Tukey in the 1960s we now have the very efficient Fast Fourier Transform (FFT).  This forms the basis of signal processing in many modern communications systems (and much else as well!)

Sunday, 22 April 2018

Moving Away from Press-To-Talk?


From the start, radio amateurs and many others have used the "press-to-talk" approach for voice communications: transmission is initiated by pressing a button, talking continues for a period of time and then the operator invites the other party (or parties) to reply while he or she receives.  This approach is currently used for both analog and digital modes.  It allows simpler equipment, and the same communication channel can be reused for communication in either direction.  Obvious disadvantages include the inability of the receiving station to interrupt or reply during an 'over', and the lack of feedback to the sender about the reception of their signal (until the next over).

Can we move away from PTT to achieve more natural methods of radio communication, say over HF channels?  Cellular systems achieve duplex operation via rapid time multiplexing, or by the use of multiple frequency allocations (TDD or FDD).  To avoid significant complexity, a time-division scheme with longer frame times could be envisaged as follows:   

  • We assume an (at least partially) software-defined approach whereby speech is digitised and only transmitted after the voice-activity detector (VAD) indicates speech is present.  These packets are transmitted during an allocated period of the time frame.  For example, if A initiates the call, her packets could be transmitted during the first part of the frame, after which A will receive packets from B until the end of the frame.
  • We consider an adaptive scheme where the person who is talking the most will get a larger portion of the Tx time (see the sketch after this list).  So if B is mainly listening to A, he might be allocated just the last 10% of the frame for his transmission - which is just enough for some interjections or brief comments, plus (digital) feedback on signal quality, including how much of his speech is queued and waiting to be sent.
  • This quasi-duplex scheme requires a cooperative approach from the operators -- they would be given an indication of how much of their speech is waiting to be sent, and how much from the other end is waiting.  Polite operators would stop talking when the other party wants to say something! 
  • What frame period should be used?  Longer frames (eg >10 seconds) could allow greater interleaving and robustness, but of course latency will become an increasing problem for two-way communications.  Short frames (eg a few seconds) will suffer a higher overhead from Tx/Rx switching, guard times, etc and need tighter synchronisation.  Of course we envisage the use of source and channel coding (eg ~FreeDV), so very short frame durations should be avoided to suit these algorithms.
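A minimal sketch of how the adaptive split might be computed, assuming each end simply reports how many seconds of speech it has queued (the frame period, 10% floor and function name are illustrative only):

# Sketch: split a frame between A and B according to how much speech each has queued.
# The frame period, the 10% floor and the function name are illustrative choices.
FRAME_S = 4.0        # total frame period in seconds
MIN_SHARE = 0.10     # guarantee each end at least 10% for interjections and feedback

def allocate(frame_s, queued_a, queued_b, min_share=MIN_SHARE):
    """Return (tx_time_A, tx_time_B) based on the queued speech at each end."""
    total = queued_a + queued_b
    share_a = 0.5 if total == 0 else queued_a / total
    share_a = min(max(share_a, min_share), 1.0 - min_share)
    return frame_s * share_a, frame_s * (1.0 - share_a)

# Example: A has 6 s of speech queued, B only 0.3 s
print(allocate(FRAME_S, 6.0, 0.3))   # A gets ~3.6 s of the frame, B keeps the 10% floor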


Given the likely short pauses and delays in speech delivery under the scheme above, it is hard to say how well it will work.  I've therefore created a small Python simulation of this "ADDS" scheme (see figure below) using UDP packet transmission between two Linux PCs, with sound cards and headsets.  (ADDS stands for "adaptive delayed duplex speech".)  The VAD is very simple and just checks the maximum sample amplitude in a block.  This simulation is rather basic, with no source or channel coding, using 8 bit samples at 8 kHz.  The percentage of transmission time at each terminal can only be adjusted manually at present, and while the local queue length is visible, the remote queue length is not.
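That amplitude-threshold VAD is roughly the following (block size and threshold are illustrative values - the actual simulation code is on github):

# Rough sketch of the simple VAD: a block counts as "speech" if its peak amplitude
# exceeds a threshold.  The block size and threshold are illustrative values.
import numpy as np

BLOCK = 320            # samples per block (40 ms at 8 kHz)
THRESHOLD = 12         # peak amplitude threshold for 8 bit samples (assumed)

def block_has_speech(samples):
    """samples: one block of signed 8 bit audio as a numpy array."""
    return np.max(np.abs(samples.astype(int))) > THRESHOLD

def speech_blocks(audio):
    """Yield only the blocks the VAD marks as speech - silent blocks are not sent."""
    for i in range(0, len(audio) - BLOCK + 1, BLOCK):
        block = audio[i:i + BLOCK]
        if block_has_speech(block):
            yield block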





The results seem encouraging so far.  Using a frame period of 3 or 4 seconds, the pauses in conversation are obviously noticeable, but not too annoying.  On the other hand, natural speech contains silence periods.   These are not transmitted, so speech from the other end (that has been waiting) may be delivered faster than it was spoken.  The simulation code is on github.

This method would take some effort to implement over a radio channel. Frame sync could use NTP, as for other recent digital modes like WSJT.  For simplicity the frame allocations could be fixed, e.g. 50% of the frame for each party, but performance would suffer significantly.  The adaptive scheme requires the control portions of the frame to be particularly robust which will be challenging on a flaky channel.  It would be sensible to always send the control transmissions in the same part of the time frame. For example A's status and control information (including ID) could be sent in the first (say) 10% of the frame, B's status in the last 10%, with the rest allocated to speech (probably by the calling party A), as required. 

Friday, 22 December 2017

Gravity Simulations for Year 6/7 Students

This note discusses Python programming for year 6/7 students at a local primary school.  The activity was part of a STEM program coordinated by CSIRO.

Python provides an excellent option for introducing programming in schools and has been widely adopted in the UK.  After discussions with the class teacher, we decided to use Python within the area of Space Science.  In particular the lessons aimed to introduce the elements of programming in a Python development environment using "Turtle Graphics", with a focus on simulating the motion of heavenly bodies subject to gravity.

Primary students can make very good progress with programming concepts and are keen to learn.   However the motion of bodies subject to acceleration is normally a high-school topic.  The examples below aim to show that using first-order equations can put this topic within reach of primary students.  We need the following "maths":
  • distance travelled = velocity * time interval    
  • change in velocity = acceleration * time interval
  • force = mass * acceleration 

The last equation, which is Newton's famous second law, is not actually used in the examples below but provides a great way to talk about the concepts involved, including large rockets that can provide huge thrust forces!   The second equation is basically the definition of acceleration, which might be a novel concept for younger programmers.  We are all familiar with the first equation, and strictly speaking it only applies when the velocity is fixed.  Our simulation approach is to take many small time steps and calculate the object's position and velocity at each step.  As long as the velocity is changing "smoothly", this simulation approach gives a good approximation to the formulas normally used (eg s = ut + at^2/2 etc).



Graphics output from gravity_order1.py
Anyway, two examples are shown below using this approach.  Note that to further simplify the problems we assume that the force of gravity only acts vertically and affects vertical motion, with no acceleration in the horizontal direction.

In the first example a ball is thrown upwards and falls due to gravity.  The graphical output is shown on the left.   The code (below) is very simple:  after some initialisation statements, a loop is used to evaluate the ball's vertical velocity and position at each time step.
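A sketch along the lines of gravity_order1.py (turtle graphics, first-order update at each time step; details may differ from the actual file):

# Sketch in the spirit of gravity_order1.py: a ball thrown upwards under gravity,
# updated with the first-order equations at each small time step.
import turtle

dt = 0.05            # time interval (seconds)
g = -10.0            # acceleration due to gravity (downwards)
x, y = -200.0, 0.0   # starting position (screen units)
vx, vy = 30.0, 60.0  # starting velocity (horizontal, vertical)

ball = turtle.Turtle()
ball.penup()
ball.goto(x, y)
ball.pendown()

while y >= 0:                 # stop when the ball comes back down to the ground
    x = x + vx * dt           # distance travelled = velocity * time interval
    y = y + vy * dt
    vy = vy + g * dt          # change in velocity = acceleration * time interval
    ball.goto(x, y)

turtle.done()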











From lander.py 

As a second example consider a lunar lander simulation, where the thruster on the lander can be toggled on and off by pressing the 'up' key. A sample output is shown on the right.  Initially the lander has zero vertical velocity and a small horizontal velocity.  It starts to accelerate towards the lunar surface due to the moon's gravity, as shown by the increasing distance between the black dots at each time step.  When the thruster is turned on, the position dots change to red.  For simplicity, we assume in the code (below) that the thruster causes an acceleration of equal magnitude, but upwards. Hence the rate of descent decreases until the thruster is turned off, after which position is shown in blue.  By toggling the thruster, the lander can be brought gently to the surface.  This takes a little practice!

It would have been nice to retain a simple simulation loop like the first example, but include a 'key-pressed' check for thruster control.  That doesn't seem possible in this environment, so the code for this example uses an 'event-driven' programming model.  The position and velocity calculations reside in the function 'tloop', and the thruster is toggled from a key-press handling function.
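A rough sketch of this event-driven structure (the names, colours and values are simplified and may differ from the actual lander.py):

# Sketch of the event-driven structure: a timer-driven 'tloop' does the position and
# velocity updates, and a key handler toggles the thruster.
import turtle

dt = 0.1
g = -1.62            # lunar gravity
x, y = -250.0, 200.0
vx, vy = 2.0, 0.0
thruster_on = False

lander = turtle.Turtle()
lander.penup()
lander.goto(x, y)
screen = turtle.Screen()

def toggle_thruster():
    global thruster_on
    thruster_on = not thruster_on

def tloop():
    global x, y, vy
    a = -g if thruster_on else g    # thruster on: equal acceleration, but upwards
    x = x + vx * dt                 # distance travelled = velocity * time interval
    y = y + vy * dt
    vy = vy + a * dt                # change in velocity = acceleration * time interval
    lander.goto(x, y)
    lander.dot(4, "red" if thruster_on else "black")
    if y > 0:
        screen.ontimer(tloop, 50)   # call tloop again in 50 ms until touchdown

screen.onkeypress(toggle_thruster, "Up")
screen.listen()
tloop()
turtle.done()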











Friday, 17 November 2017

App and Applet for RF Link Budgets


Link budgets are used in radio communications to determine design parameters such as antenna sizes, bit rates and transmit power.  A quick link budget explains why Voyager 1 only transmits at ~160 bit/s, even though we use a 34 m antenna on Earth to receive its signal!

About 10 years ago we wrote a Java program for link budget calculations during an ISU workshop.  Here is a version of the code.  The tutorial we wrote for this application is still very relevant and discusses the effects of antenna gains, EIRP, bit rates etc.
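The arithmetic behind such a budget can be sketched in a few lines of Python; the numbers below are round illustrative figures for a deep-space link, not the exact Voyager parameters:

# Sketch of a basic link budget in dB.  All values are round illustrative figures.
import math

k_dB = -228.6                  # Boltzmann's constant, 10*log10(1.38e-23), in dBW/K/Hz

eirp_dBW   = 62.0              # spacecraft EIRP (assumed ~23 W into a 3.7 m dish)
freq_Hz    = 8.4e9             # X-band downlink
dist_m     = 2.2e13            # roughly 145 AU
rx_gain_dB = 68.0              # assumed gain of a 34 m ground antenna at X-band
sys_temp_K = 25.0              # assumed receive system noise temperature
ebno_req   = 3.0               # required Eb/No with strong coding (assumed)

wavelength = 3e8 / freq_Hz
path_loss_dB = 20 * math.log10(4 * math.pi * dist_m / wavelength)

cn0_dBHz = eirp_dBW - path_loss_dB + rx_gain_dB - k_dB - 10 * math.log10(sys_temp_K)
max_rate = 10 ** ((cn0_dBHz - ebno_req) / 10)

print("Path loss:    %.1f dB" % path_loss_dB)   # ~318 dB
print("C/N0:         %.1f dBHz" % cn0_dBHz)
print("Max bit rate: %.0f bit/s" % max_rate)    # a couple of hundred bit/s with these
                                                # guesses - the same order as Voyager 1's
                                                # real 160 bit/s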

Recently this software has been converted to an Android app.   Here are a couple of screen shots (from the initial release):

     

The LHS shows entry of some LB parameters for a cubesat downlink.  After completing all the boxes with blue lines, the "CALC" button produces outputs shown on the RHS.  The current app is fairly simple - it may be further developed at some stage. 

2nd Dec 2017: an updated version of this app now includes several sample link budgets, with a few comments on the parameters in each case.  These include two satellite links, one terrestrial application and a high-altitude balloon example.

Jan 2019:  Now updated to 0.5 on Google Play.   This version retains your LB parameters for re-use at a later time and includes some information on spectral efficiency.


Monday, 24 July 2017

Using FreeDV and SDR with ALSA Loopback

Introduction

FreeDV is a low bit-rate digital voice mode started by VK5DGR.  This software combines speech coding, error correction and modulation to digitally encode speech, generating a low bandwidth analog signal that is usually connected to a conventional amateur-radio transceiver.  How about using FreeDV with an SDR approach, such as GNURadio plus a USRP (or similar device) - how should FreeDV be connected in this case?

Interfacing Options

One approach would use the FreeDV packages on the command line and simply pipe signals between software modules.  This can work very well (eg this example) but has the disadvantage that graphics and related GUI controls might be lost.

FreeDV running on a PC normally uses two sound cards - one for the mic/headphone connections and another for the low-bandwidth modulated signals.  GNURadio of course offers audio interfaces.  So we could imagine a PC with three sound cards, with the low-IF modulated signal from FreeDV output from SC#2 and then looped back into SC#3 as the GNURadio source.  Apart from the extra hardware, that is not a good idea, as the additional A/D and D/A operations would probably cause significant degradation.  But we can do the equivalent loopback operations within the PC using digital streams, as shown below.



Sample Implementation using ALSA Loopback

I thought it would be easy to set this up on my Ubuntu 16.04 machine, but it took a little longer than expected.  This approach uses the ALSA loopback device, which is created by "sudo modprobe snd_aloop".  You can see information about sound interfaces by using "aplay -L" or "arecord -L".  This loopback contains multiple streams, and to achieve the signal flow shown above the loopbacks can be given names associated with specific card, device and subdevice numbers.  These names can then be used in FreeDV or GNURadio.  (This page gives useful example information regarding ALSA device architecture.)

The loopback streams can be defined in the .asoundrc file.    I used the following:

# ALSA config in .asoundrc for freedv <> gnuradio audio streams
# We assume that the "Loopback" card exists and that for each
# subdevice, a signal played into device 0 can be captured from
# device 1 (and vice versa).
# The mic and headphone interfaces are not included in this 
# file description - it should be possible to use standard names. 

# LB out #0 - to route freedv mod output to gnuradio input 
pcm.LB00 {
   type plug
   slave.pcm "hw:Loopback,0,0"
   }
# LB in #0
pcm.LB10 {
   type plug
   slave.pcm "hw:Loopback,1,0"
   }

# LB out #1 - to route gnuradio output to freedv mod input 
pcm.LB11 {
   type plug
   slave.pcm "hw:Loopback,1,1"
   }
pcm.LB01 {
   type plug
   slave.pcm "hw:Loopback,0,1"
   }

Let's assume you have FreeDV version 1.2 installed and running.  Use the "Audio Config" tool to set up the connections shown in the figure below.  So select the LB01 device "from radio" under the Receive tab, and the LB10 device "to radio" under the Transmit tab.  Likewise the appropriate audio devices can be named in the grc setup.



The GNURadio diagram below shows a very simple audio loopback, with a variable amount of added noise.  Notice that the audio source is set up with device name LB00, and the audio sink from GNURadio has device name LB11.  During this simple test, with both FreeDV and GNURadio running, the SNR can be varied and the effect on audio quality, sync etc. observed in real time.
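The same loopback-plus-noise test can also be written directly as a small GNURadio Python flowgraph - a rough sketch, assuming GNU Radio 3.7-style imports and the device names defined above:

# Rough sketch of the loopback + noise test as a GNURadio python flowgraph.
# Device names match the .asoundrc definitions above; the sample rate and
# noise level are illustrative.
from gnuradio import gr, audio, analog, blocks

class loopback_noise(gr.top_block):
    def __init__(self, rate=48000, noise_amp=0.05):
        gr.top_block.__init__(self)
        src   = audio.source(rate, "LB00")      # FreeDV modulator output via LB out #0
        noise = analog.noise_source_f(analog.GR_GAUSSIAN, noise_amp, 0)
        adder = blocks.add_ff()
        sink  = audio.sink(rate, "LB11")        # appears at FreeDV's "from radio" (LB01)
        self.connect(src, (adder, 0))
        self.connect(noise, (adder, 1))
        self.connect(adder, sink)

if __name__ == "__main__":
    tb = loopback_noise()       # change noise_amp to vary the SNR
    try:
        tb.run()                # runs until interrupted (Ctrl-C)
    except KeyboardInterrupt:
        pass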

Of course for a real application the grc flow-graph above would be replaced by SSB transceiver processing, which simply translates and interpolates the audio signal to a sampled signal suitable for the SDR interface.  I have a B200 USRP which I have used with GNURadio for initial SSB tests on 70 cm.  The next step will be to try my previous grc software with FreeDV.  (BTW GNURadio also includes a Codec2 module!)

Please Note
For reliable operation I had to stop PulseAudio while running the FreeDV/GNURadio test described above.  Possibly there is a way of avoiding this.  Also note that PulseAudio will respawn after "pulseaudio --kill" unless you adjust the default setting, e.g. with a .conf file that includes "autospawn=no".  It is probable that PulseAudio modules could be used instead of snd_aloop - I haven't explored that path.