Welcome to Richard Dobson's Home Page



Quick links:

CDP Multi-Channel Toolkit
The Audio Programming Book
High Performance Audio Computing
LHCsound software resources
Sliding Phase Vocoder
Soundcard Attrition Page
Streaming Phase Vocoder
Surround, Ambisonic and multi-channel links
NOS-DREAM (Bath University)
Virtual Electronic Poem (VEP)
Planetary Solfeggio Frequencies!
Science Spaces - B-Format reverb impulses

I am professionally trained as a flute-player and composer, and self-taught (with a lot of expert and patient help) as a programmer. For many years I worked as a flute-maker and repairer. I still play the first flute I ever made. I also play other kinds of flute, especially the Indian bamboo flute (bansuri).

When not playing or teaching the flute, my main activity is in audio software development and research. I am a Core Developer for the Composers Desktop Project (CDP) system, with a focus on overall maintenance and compilation. The majority of the software was, and continues to be, written by Trevor Wishart. I was for many years a visiting research fellow in the Department of Computer Science at Bath University, where I worked with Prof. John Fitch on various research projects, some of which were actually funded by research grants.

Computer Music and Csound

I have contributed code to Csound, along with many other developers.  I wrote the chapter "Designing legato instruments in Csound" for 'The Csound Book'.
 

Streaming Phase Vocoder


Resources for multi-channel soundfiles and surround-sound

There are two primary aspects to surround sound - the sounds themselves, and the container file format for exchange and distribution. The standard audio file formats have for many years been Microsoft's WAVE format, and the AIFF format associated principally with Apple computers. These formats are, however, ill-suited to surround sound, as they do not support the definition of channel or speaker layouts. With Windows 98 SE, Microsoft introduced an update to WAVE in the form of the new WAVEFORMATEXTENSIBLE (WAVE_EX) format. It defines 18 named speaker positions, supporting all the standard consumer layouts such as 5.1. However, it does not support the many regular speaker layouts that are recommended for B-Format reproduction. Like the original WAVE format, it also serves as a streaming protocol used to configure recording and playback hardware, and is now standard across all Microsoft platforms. It also supports sample formats beyond 16 bits. See the NOS-DREAM WAVE_EX page for some examples.
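To make the channel-mask idea concrete, here is a minimal C sketch. The speaker-position bit values follow Microsoft's documented WAVE_EX flags; this is only an illustration of how a layout such as 5.1 is described, not a full WAVEFORMATEXTENSIBLE definition.

```c
#include <stdint.h>

/* A few of the 18 named speaker positions; each is one bit in a
   32-bit channel mask. Values follow Microsoft's documented flags. */
enum {
    SPEAKER_FRONT_LEFT    = 0x1,
    SPEAKER_FRONT_RIGHT   = 0x2,
    SPEAKER_FRONT_CENTER  = 0x4,
    SPEAKER_LOW_FREQUENCY = 0x8,
    SPEAKER_BACK_LEFT     = 0x10,
    SPEAKER_BACK_RIGHT    = 0x20
};

/* standard 5.1 layout: L, R, C, LFE, Ls, Rs */
static uint32_t make_5_1_mask(void)
{
    return SPEAKER_FRONT_LEFT | SPEAKER_FRONT_RIGHT | SPEAKER_FRONT_CENTER
         | SPEAKER_LOW_FREQUENCY | SPEAKER_BACK_LEFT | SPEAKER_BACK_RIGHT;
}

/* count the channels present in a mask: one set bit per speaker */
static int mask_channels(uint32_t mask)
{
    int n = 0;
    while (mask) { n += mask & 1; mask >>= 1; }
    return n;
}
```

A reader of the mask can therefore both count the channels and know exactly which speaker each one feeds - precisely what plain WAVE and AIFF cannot express.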

Ambisonic B-Format

This is one of the most versatile yet economical surround encoding systems around; at least until you start using lots of speakers. A B-Format stream can be decoded to a wide range of speaker layouts, including the ubiquitous but otherwise unambitious 5.1 cinema surround arrangement. See the multi-channel links referenced above for more information.

I have defined a simple B-Format specialization of WAVE_EX that supports up to third-order B-Format streams (i.e. up to sixteen channels), and is described in detail here. It is now widely known and used as the AMB format. I have created a set of free command-line tools (Windows, OS X) for assembling, disassembling and playing multi-channel files, in the form of the CDP Multi-Channel Toolkit. It includes support for AMB files.
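The channel counts follow directly from the ambisonic order: a full-sphere B-Format stream of order n carries (n + 1)^2 channels, which is where the sixteen-channel third-order limit above comes from. A one-line sketch (not part of the Toolkit itself):

```c
/* Channel count for a full-sphere B-Format stream of a given ambisonic
   order: (order + 1)^2, so first order needs 4 channels and third order
   the 16 mentioned above. */
static int bformat_channels(int order)
{
    return (order + 1) * (order + 1);
}
```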

The Ambisonic community has for some years now been planning a more advanced file format that can support as wide a range of Higher-Order B-Format streams as possible, including the possibility of one or more forms of lossless compression. Anyone interested in this aspect is recommended to subscribe to the sursound mailing list, where this and many other aspects of surround sound are debated and explored by some of the world's leading specialists in the subject. I disagree with most people on the sursound list by considering that "plain old" first-order B-Format is still useful.


High Performance Audio Computing - the Sliding Phase Vocoder

The relatively recent emergence of consumer-level workstations with multiple cores hides a long history within computer science of parallel and concurrent processing. The most familiar example of this is probably the computer "cluster" associated with modern supercomputers. Other technologies are also in widespread use, involving not four or eight cores on a CPU but hundreds, and even thousands. I am interested in how these techniques associated with classic High-Performance Computing (HPC) might be extended to support advanced audio processing. The challenges are formidable, especially with respect to latency (an issue that historically has been of little concern to computer scientists, who are grateful if a task previously taking a month to run can now be completed in a few days). Hence, HiPAC - "High-Performance Audio Computing". See the NOS-DREAM HiPAC page for more details.

This is a subject that should be of concern to all computer musicians - to say nothing of system manufacturers and software houses. Despite the manifest speed increases offered by modern multicore desktops, relatively little of this increase is easily available to real-time audio, as so much of it is inherently serial in nature (e.g. recursive filters) and cannot be parallelised. On the other hand, many processes hitherto disregarded, but which can effectively be parallelised, can now be considered for real-time implementation given suitable accelerator hardware, such as the now widely available GPU processors. One such new process, the Sliding Phase Vocoder, formed the basis of a recent research project at Bath University. Refer to that page for some downloadable sound examples.
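The recursive-filter point is worth seeing in code. In this minimal one-pole lowpass (my own illustration, not from any of the projects above), every output sample depends on the one before it, so the loop cannot simply be split across cores without changing the result:

```c
/* One-pole recursive lowpass: y[n] = (1 - a)*x[n] + a*y[n-1].
   The dependency of each y[n] on y[n-1] is what makes the loop
   inherently serial: iteration i cannot start until i-1 is done. */
static void onepole_lowpass(const double *x, double *y, int n, double a)
{
    double prev = 0.0;                 /* y[-1], assumed silent history */
    for (int i = 0; i < n; i++) {
        prev = (1.0 - a) * x[i] + a * prev;
        y[i] = prev;
    }
}
```

Block-based FFT processes, by contrast, offer abundant parallelism within each frame, which is one reason spectral methods map so much better onto GPU-style hardware.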


The Audio Programming Book

I contributed a number of chapters, targeted primarily at readers with little or no prior programming experience. Each chapter is sound and music oriented - even the very first, which, rather than printing the traditional "Hello World" message, instead prints "What a Wonderful World!".
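In that spirit, a minimal reconstruction of the idea (the exact listing is, of course, the book's own):

```c
#include <stdio.h>

/* The traditional first program, reworded as described above.
   Reconstructed from the description, not copied from the book. */
static const char *wonderful_world(void)
{
    return "What a Wonderful World!";
}

static void print_greeting(void)
{
    puts(wonderful_world());
}
```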

Book Chapters

1. Programming in C. A progressive series of projects introducing core C idioms.

MIDI/frequency conversion
Equal-tempered scales
Time/value breakpoint files (or "automation data")
Introduction to Gnuplot
Exponential attack/decay envelopes
Sinewave generation
Intro to binary soundfile representation
Introduction to Audacity

2. Audio Programming in C. Extends the progression of Chapter 1. It introduces the specially created portsf soundfile library (WAVE, AIFF only), leading to a number of examples of audio synthesis and processing.

changing level
normalization
breakpoint parameter control
stereo panning
envelope extraction
waveform synthesis and simple oscillators
additive synthesis for bandlimited waveforms
table lookup oscillators

DVD Chapters

1. Audio Processing in C – Delay lines and Noise Generation

circular buffer
impulse response
feedback
multi-tap methods
variable-length delays
white, pink and red noise
TPDF and dither
variable-rate and interpolating RNG
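The circular buffer is the core idea behind all the delay topics above. A minimal sketch (my own illustration, deliberately without the book's refinements such as feedback or multiple taps):

```c
#include <stdlib.h>

/* Minimal circular-buffer delay line: the write position wraps round
   the buffer, and the oldest sample (len ticks ago) is read out just
   before being overwritten. */
typedef struct {
    double   *buf;
    unsigned  len;       /* delay time in samples */
    unsigned  writepos;
} Delay;

static Delay *delay_create(unsigned len)
{
    Delay *d = malloc(sizeof(Delay));
    if (!d) return NULL;
    d->buf = calloc(len, sizeof(double));   /* zeroed: silent history */
    if (!d->buf) { free(d); return NULL; }
    d->len = len;
    d->writepos = 0;
    return d;
}

/* read the sample written len ticks ago, store the new input, advance */
static double delay_tick(Delay *d, double input)
{
    double out = d->buf[d->writepos];
    d->buf[d->writepos] = input;
    d->writepos = (d->writepos + 1) % d->len;
    return out;
}

static void delay_destroy(Delay *d)
{
    if (d) { free(d->buf); free(d); }
}
```

Feeding an impulse into a 3-sample delay, the impulse reappears exactly three ticks later; taps at other offsets behind the write position give the multi-tap variants.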

2. Digital Filters

Basic types: core properties
The vocabulary of filters
Resistors, capacitors and inductors
The FilterResponse utility program
FIR filters and convolution
Poles and zeros, the biquad
Robert Bristow-Johnson's EQ Cookbook
Testing techniques
Subtractive synthesis and BUZZ

3. From C to C++

A mostly theory-oriented chapter, completing coverage of C and looking towards Object-Oriented Programming.

Description of Csound's fast interpolating table lookup oscillator using bitwise operators

Summary of the C language. Demonstrates the linked list for chaining audio effects.

4. Audio programming in C++

"Thinking Objectively". Introduces core language elements of C++. Recasts previous C examples in C++.

5. Audio Plug-ins in C++

Threads, callback functions and interrupts
Example framework – "mini-plugin"
A first VST plug-in example - subharmonic filter bank
Debugging techniques - the log file


The LHCsound Project

LHCsound is a project to sonify (i.e. represent as sound) particle collision data from the ATLAS detector of the Large Hadron Collider (LHC) at CERN. An initial pilot project (January 2010) was funded as an outreach project by the Science and Technology Facilities Council (STFC). CDP (Archer Endrich and I) was initially approached with a view to using the CDP system for creating sonifications. This initial collaboration resulted in the publication in summer 2010 of a wide range of sounds, together with an associated website. The work excited considerable interest online, from the broadcast and print media, and from composers.

A follow-up project in 2011, also funded by the STFC, extended the outreach theme by focussing on development of dedicated sonification software and associated support materials, to form the basis for direct outreach work in public workshops, and in schools. See my LHCsound Resources Page for the latest version of my LHCSynth application (Windows and OS X) and documentation.


For any enquiries regarding these projects, feel free to email me.


Last updated: June 09 2020