Synclavier History Tour and The Quantum Potential With Christopher Currell
It's hard to deny that the advent of digital synthesizers changed the sound design landscape forever, none more so than computer-based systems such as the New England Digital Synclavier of the early 1980s. With stiff competition from the Fairlight, the Synclavier II pushed the envelope and subsequently changed the sound of popular music itself. With its new approach of integrating synthesis with composition, artists were free to do things they had never dreamed of before, in less time and less space - no studio required.
Those were exciting times, and we are very fortunate to get a first-hand account of them from guitarist, composer, and producer Christopher Currell, one of the early adopters of the Synclavier II system, who used it extensively well into the 2000s, working with Walt Disney, major labels, artists such as Michael Jackson, and many large companies on music and sound technology. Christopher shares his experience with the legendary Synclavier II system, as well as his innovative work with 3D audio and his current endeavor - The Quantum Potential.
After looking over your first EVENT HORIZON post, I am wondering what the atmosphere was like in 1980. Did people know what sampling was, and was there anything else shown at AES - were you aware of anything else besides the Synclavier?
The concept of sampling and then re-performing a sound was quite new in those days. The first sampling instrument that I was aware of was the Fairlight CMI. I was really impressed and I wanted one. The problem was the price! But at almost the same time, I also became aware of the Synclavier. I was able to compare instruments at AES.
At the time, the Synclavier did not have sampling yet; it was a synthesizer. But the sound of it was magical and very hi-fi. Its synthesis engine sounded better to me than the Fairlight's. It also had more voices, and the sequencer was more advanced.
It was sometime later that the Synclavier introduced its Sample-to-Disk option. It was not polyphonic in realtime...only monophonic. It actually played a mono sample directly from hard disk...not RAM.
But the sound quality was fantastic. It could sample at any sample rate up to 50 kHz.
What was interesting was that it incorporated a method that allowed the user to play a polyphonic sound into the sequencer; then, locked to tape via SMPTE, it could do multiple passes to tape using a mono sample played from the hard disk. Each pass would strip off a voice and print it to tape, automatically eliminating the previous pass. After a few passes, the entire polyphonic performance was recorded using the sampled sound. It was a bit tedious, but it worked well, and the result was as if you had played the entire polyphonic performance in one pass.
Backing up a bit, a lot of people think of the Synclavier as just a sampler for playing dog barks and stuff. It was more than that from the beginning, with a full composition system? How was it presented at AES in 1980, was sampling demonstrated as well?
Yes...it could do much more than play samples of dogs barking! It was designed to compose right from the beginning. It got more and more advanced over time. I do not remember when I first saw the Synclavier at AES...maybe 1980. I do know it was the first time it was shown and it did not have sampling yet. It was basically an FM digital synthesizer. But the sound was amazing!
The DX7 was not released until 1983, and the Synclavier II's (SII) FM synthesis engine is very different from the Yamaha FM model. Can you describe how an FM patch on the SII was set up?
The Synclavier II FM architecture was fairly simple.
There were four separate synthesizer voices available at one time, called Partial Timbres. These could be assigned in any number in a patch from 1 to 4. So in a 32 voice system a patch with 4 Partial Timbres has 8 note polyphony, 2 Partial Timbres would have 16 note poly, and so on. It had a Chorus button that would double up and detune the 4 Partials for as many as 16 Partials per key, but still only 4 separate Partials, just mirrored and detuned.
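The voice arithmetic above can be sketched as a quick back-of-the-envelope calculation. This is a hypothetical modern illustration of the allocation rule, not anything from NED's software:

```python
# Hypothetical sketch of Synclavier II voice allocation: polyphony is
# simply total hardware voices divided by the number of Partial
# Timbres the patch uses (1 to 4).
TOTAL_VOICES = 32  # a 32-voice system, as in the example above

def polyphony(partial_timbres: int) -> int:
    """Notes playable at once for a patch using 1-4 Partial Timbres."""
    if not 1 <= partial_timbres <= 4:
        raise ValueError("a patch uses 1 to 4 Partial Timbres")
    return TOTAL_VOICES // partial_timbres

print(polyphony(4))  # 8-note polyphony
print(polyphony(2))  # 16-note polyphony
```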
Each Partial Timbre had its own Volume Envelope, FM (Harmonic) envelope, 24-harmonic digital tone generator, LFO (Vibrato), and Portamento section. The digital tone generator is dialed up from a total of 24 sine wave harmonics to produce the additive waveforms. There was no envelope per harmonic, just the master Volume Envelope. (Resynthesis added this feature later, along with 128 harmonics.) The Harmonic envelope controls the sustain and peak amount of FM from a sine wave, at a frequency set by the FM ratio amount.
The synth voices were 8-bit, and there were two anti-aliasing hardware switches that the user could use in various combinations depending on the patch. They worked quite well.
The Arturia Synclavier V is a direct port of the original software code by Cameron Jones, the original programmer for the Synclavier. If anyone is interested in the Synclavier's digital FM synthesis, download the Arturia Synclavier V. It sounds great!
Sounds like each partial was a 2-op FM model, with the FM envelope acting on another oscillator. Was there amplitude control for the FM envelope? Could you modulate an additive wave as well as sine waves?
Well, I can explain a bit of the basics of how the sound generation section of the Synclavier II created its sound.
Contained inside the Synclavier computer were the oscillators and the digital-to-analogue converters. They were a proprietary design. Each synthesizer 'voice' started with a pair of oscillators.
The oscillator pairs used a combination of additive and FM synthesis. One oscillator was the carrier. It set up a sine wave over which 24 harmonics could be generated and balanced. The individual harmonics were called up individually or in groups from a section of the panel buttons and adjusted with the control knob.
As each was adjusted, its relative level showed up on the keyboard LED readout. The completed carrier wave was then given an envelope called the 'volume envelope'. It consisted of six parameters: delay, attack rate, peak level, initial decay rate, sustain level, and final decay rate, all of which were controlled with the knob and then displayed on the LED readout.
In fact, on the basic Synclavier II, the control knob and the LED readout were the only way of adjusting and examining the many synthesizer parameters. The last button pushed dictated which parameter was being addressed. Four single LEDs next to the readout indicated whether the units for the parameter chosen were Hz, ms, dB or an arbitrary scale.
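The six-parameter envelope just described maps naturally onto a small piecewise function. A hypothetical sketch, with the parameter names taken from the description above and the real dB/ms units simplified to linear levels and seconds:

```python
def envelope(t, delay, attack, peak, init_decay, sustain, final_decay, gate):
    """Level at time t for a Synclavier-style six-parameter envelope:
    delay -> attack to peak -> initial decay to sustain -> hold while
    the key is down (gate = note length) -> final decay to zero.
    Times in seconds, levels 0..1; a simplification for illustration."""
    if t < delay:
        return 0.0
    t -= delay
    if t < attack:
        return peak * t / attack          # rising to peak level
    t -= attack
    if t < init_decay:
        return peak + (sustain - peak) * t / init_decay
    t -= init_decay
    if t < gate:
        return sustain                    # held while the key is down
    t -= gate
    if t < final_decay:
        return sustain * (1 - t / final_decay)
    return 0.0

# level halfway through a 0.1 s attack toward peak 1.0
print(envelope(0.05, 0.0, 0.1, 1.0, 0.2, 0.6, 0.3, gate=1.0))  # prints 0.5
```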
The second oscillator in the pair was an FM modulator. It too started as a sine wave (and stayed that way), whose frequency was determined by a setting called 'ratio'. For example, a 1,000 Hz carrier modulated by a sine whose ratio is 0.1 would produce sidebands spaced 100 Hz apart around the carrier.
The modulator could also be set to a constant frequency between 0.1 and 999 Hz, independent of the carrier frequency. This created non-harmonic tones, which had the potential to be much more interesting than the harmonic ones created by the additive synthesis process.
The modulator wave was then given its own six-parameter envelope called the 'harmonic envelope'. The higher the modulator wave's amplitude, the more sidebands were created (and the greater their level) which made for a 'denser' sound.
While the volume envelope controlled the overall shape of the sound, the harmonic envelope independently controlled its density, so that a sound could change from dense to simple, or vice versa, and then back again as it played.
The amount of FM could also be varied across the keyboard so that lower notes could be made to sound more brilliant with more sidebands, while higher notes became less "grainy".
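The two-oscillator voice described above can be sketched in a few lines of code. This is a greatly simplified modern illustration, not NED's actual DSP; the function and parameter names are hypothetical, and phase modulation is used as the customary stand-in for FM:

```python
import math

SR = 50000  # sample rate in Hz (the Synclavier sampled at up to 50 kHz)

def fm_voice(freq, ratio, harmonics, mod_index, dur=0.1):
    """One simplified voice: an additive carrier built from weighted
    sine harmonics (the Synclavier allowed up to 24), modulated by a
    single sine whose frequency is freq * ratio. mod_index stands in
    for the harmonic envelope's level; a real voice would vary it over
    time, moving the sound between dense and simple."""
    out = []
    for i in range(int(SR * dur)):
        t = i / SR
        mod = mod_index * math.sin(2 * math.pi * freq * ratio * t)
        s = sum(amp * math.sin(2 * math.pi * freq * k * t + mod)
                for k, amp in enumerate(harmonics, start=1))
        out.append(s)
    return out

# A 1 kHz carrier with ratio 0.1 gives a 100 Hz modulator, producing
# sidebands spaced 100 Hz around each carrier harmonic.
samples = fm_voice(1000.0, 0.1, harmonics=[1.0, 0.5, 0.25], mod_index=2.0)
```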
I hope this answers your question.
That does answer the question, thanks. Listening to this demo made by Denny Yeager, I can hear that sound in a lot of the patches. The harmonics are somewhat animated, giving it that extra lush feeling. FM is really great for that, but it had problems with aliasing and frequency fold-over. Were those issues present on the Synclavier?
That demo disk was created very early in the Synclavier II's development. The synthesis features increased radically over time. And, of course, polyphonic sampling at 100 kHz was added, along with music printing and the Direct to Disk recording system.
The aliasing was generally not a problem with the Synclavier. As I mentioned before, there were two switches on the hardware that engaged anti-aliasing filters. These were very specific, deep-cut filters that sat in the exact frequency range of the aliasing noise. They effectively cut out any aliasing. Depending on the patch, these filters could also be used as a kind of creative EQ. As an experiment, I tried to reconstruct those filters using the software and hardware EQ available at the time, and I could not even get close to duplicating them.
The interesting thing about the Synclavier's digital synthesis was the amazing clarity, especially in the high end frequency range.
The downside was that the synth voices were only 8-bit, so the voices sometimes lacked detail, in addition to the aliasing. The current Synclavier V by Arturia has a bit depth of 24 bits. The bit depth is user-selectable, starting from 4 bits and going up to 24 bits in 2-bit increments. The 8-bit setting is the same bit depth as the original Synclavier II. Some people really liked the 8-bit sound of the Synclavier II.
Impressive, I never imagined Frank Zappa's early Synclavier albums were 8-bit. It demonstrates that with good filtering, 8-bit is extremely powerful. Getting back to composition, with sounds shaping up, how did production with the Synclavier change compared to the production gear you were working with before?
Well, that is an interesting question. First of all, I am a guitar player. I was always interested in different sounds. I liked the expressiveness of the guitar, but the range of sounds a guitar can create was, to me, limited. I heard all kinds of sounds in my head, and it was a bit frustrating not to have access to them. I tried using lots of guitar pedal devices...tape echo, distortion, chorus, flanging, wah and volume pedals, reverb, etc. I had a lot of gadgets hooked up to my guitar. In those days, the choice of guitar FX pedals was limited. Today, there are a LOT of pedals to choose from. Of course, having so many devices made my guitar noisy, and the quality of the overall effected guitar sound dropped.
About that time, modular synthesizers started to appear. I was fascinated by these instruments but I could never afford one. But I listened to records and read as much about them as I could. I was really into Walter Carlos and Isao Tomita among others.
Then the Mini Moog came on the scene. It was affordable (compared to modular systems) and the sounds were cool. For me, it was still expensive and the fact that you had to play it with a keyboard was very frustrating. I was a guitar player!
Around this time, 360 Systems started to experiment with pitch-to-CV/Gate conversion. This interface was called the Slave Driver. I did not have one of these. I heard that they glitched a lot. Plus, to have polyphony, you had to have a separate synth for each string! The preferred synth for this setup was the Oberheim SEM model cps-1, times six!
Anyway, when the Mini Moog came out, it became a standard in many keyboard players' setups. Since I could not play keyboards, I did not try to get one of these either, although I wanted to use the sounds it could produce.
I did finally get a Roland GR 300 analog guitar synth. This system worked well but it was limited in the types of sounds it could create. I still have that system. It is very good for the things it was designed to do.
I went from this directly to the Synclavier, so it was a huge jump. I learned and studied the Synclavier obsessively. I started with the Synclavier II, expanded it to the new keyboard, then added sampling and eventually the Direct to Disk recording system. So my synth background before the Synclavier was minimal, basically none. I went from rock guitar to the Synclavier. I did eventually get the digital guitar option, which happens to use the same guitar as the Roland GR 300 system, so I felt at home with that guitar.
During this transition, I eventually learned keyboards enough to produce my own music on this system. The word got out eventually and Michael Jackson called me to help him use his Synclavier.
Ah, so the SII replaced a lot of disparate pieces of gear. Would you say it was one of the first of what we know today as the music workstation?
The Synclavier II was a kind of workstation. It had a sophisticated sequencer, which they called the Memory Recorder, so I could create entire compositions in the Synclavier II. The problem was the number of voices. You would run out of voices quickly, so I used the Synclavier II locked to tape to do complicated compositions.
Were you using the system to compose with from the start, and did you go into it with audio production experience already?
I started right away using it as a composition and production tool. I already had professional engineering and production experience, so moving to the Synclavier II as a basis for production was relatively easy, although the recording and production techniques were a bit different. But soon I found myself wanting to use real sounds as well as synth sounds. And I was always running out of voices. Syncing to tape was fine, but it was an added step just to hear my compositions fully. Then, as the song was built up, if I needed to change something that was already recorded, I would have to make the change and then re-record it. I did not really like this method of working. It was too slow. I even ran out of tracks on the recorder. But the sound was really good! Back in those days, all I had in my small studio was a Tascam 8-track and a 2-track recorder. I was using a Tascam M30 mixer and some JBL 4311 speakers. I was using a Sound Workshop reverb. Around the same time, I upgraded (slightly) my mixer to a Ramsa WS212 and got a couple of Yamaha SPX90 reverbs.
When syncing to tape, did the Synclavier use SMPTE or something else?
Concerning synchronization, originally there was a 50 kHz tone or a beat pulse that could be printed to tape. Much later, SMPTE and other methods of synchronization were added.
I see how that can get tedious, even with a great system to work with. Were there options to add more voices or was that how everyone worked, even the big Studios?
Almost everything was an option on the Synclavier. When the Synclavier II became available, the voices, memory recorder memory and the terminal were all options. I think you could have an extra floppy disk drive as well.
Was the sampling option an upgrade as well, and how much did the upgrades cost, ballpark prices?
The sampling was an upgrade released a short time after the Synclavier II. It was mono sampling at up to 50 kHz. The price for the mono 50 kHz Sample-to-Disk option was $7,500. I think the later stereo 100 kHz Sample-to-Memory option was $15,000. Hard drives were extra.
Yikes, that was a new car or two in 1980s money. How did you handle memory and storage?
There were basically two different memories in the Synclavier: RAM for samples and memory for the Memory Recorder. The configuration of my system had 32 megs of RAM and was capable of 500,000 notes in the Memory Recorder. I never ran out of notes or events.
How did the sampling integrate into the system compared with the synthesizer patches?
The sampling voices appeared as voices alongside the FM ones. There was no difference in accessing the sampling voices versus the FM; the Synclavier treated them basically the same. It was the same in the Memory Recorder too. Sampled voices just appeared as a Partial Timbre, just like the FM.
Of course you could get deep into editing the samples through the terminal.
Aside from having enough voices, was the sample RAM for the initial recording and editing too?
The sample RAM was for samples only. So you had up to 32 megabytes for recording samples. The editing was done in RAM as well. That means you had to share sample RAM between different patches of samples. Drums did not take up much memory at all but strings, piano and other instruments with long sustains took a lot of memory. So that tended to limit how many different instruments could play at once from the memory recorder.
Were you limited by memory, or could you stream as many sounds as you wanted from hard disk?
I would have liked more RAM, and later it was possible to add much more. But for me, on my system, running out of voices was sometimes the main problem. My system had 32 sampled voices and 32 synth voices. Later, you could expand the sample voices to 96. The Direct to Disk, as the name makes clear, played audio files from the hard drive, but sampling required voices as well as RAM.
Did the amount of RAM limit the length of samples you could make? What kind of tools did the SII offer for editing, and was it fast for that kind of work?
The Synclavier sample editing tools were the basic standard editing tools that we see today. You edited the samples on the terminal, either in mono or stereo. You could play the files backwards and all those kinds of things. It had some good looping parameters that were very useful when trying to loop samples.
Seems that samples back then had to be too big for floppies. How did you manage offloading them from the hard drive?
Concerning floppies...using them to save samples happened in the early days of the Synclavier, when the system was smaller and sampling was mono Sample-to-Disk. There was no problem at all storing lots of synth patches on floppies. We did move up to hard drives...10 or 15 MEGABYTES! I remember when they came out with an 80 megabyte drive. I was in heaven! But we did have tape backup storage on a Kennedy tape drive. I think the tapes were 15 and 25 megabytes each. Of course, later, as the system evolved, we had bigger hard drives and optical drives with gigabytes of space.
It is an interesting memory exercise for me trying to remember these details from 25 to 30 years ago when I first got my first Synclavier!
Great, so much was done with so little memory and storage. At some point you started doing commercial production with the Synclavier. What did people want you to produce with it?
I had the Synclavier II for a few months when I upgraded it to the bigger keyboard. I happened to be interviewed on an educational TV program by the electronic music department of UCLA, where I talked about the Synclavier. Very quickly, I got a call from the A&R guy in the dance music department at Warner Bros Records. He had seen the show and was impressed. He wanted me to produce an avant-garde music artist that was signed to the label. Things took off from there, and I was creating and producing new-sounding records for various artists.
With synthesis, sampling and production going on, was there a community or network of other Synclavier owners to connect with, to share ideas, patches, etc.?
Yes. New England Digital, the Synclavier company, had owners' meetings where everyone got to know each other and discussed techniques and wish lists with the company. Patches were not exchanged too much. It took a lot of work to create some of the great sounds, and Synclavier owners were careful not to just give their sounds away. I suppose also that, to some degree, we were all competitors, although each Synclavier owner had their own styles and areas of expertise, which made each of us unique.
With your own system library ready to go, what were some of the first projects you used the SII for?
One of the first projects was doing a Disney Channel music scoring gig for a popular animal show called Bill Burrud's New Animal World. Soon after I was called by Warner Brothers to produce a dance record for Sire Records.
Disney and Dance records are interesting places to start. What kinds of things did the SII offer that you couldn't do otherwise?
Basically, I could work faster with the Synclavier which was necessary for the Bill Burrud shows. I could also do the entire shows from the Synclavier in my home studio.
The dance record, I was able to use new sounds and record with very high fidelity. I was able to do all the pre-production on the Synclavier in my home studio. I then went to the Warner Bros recording studio to transfer everything to multi-track tape.
Fascinating, so you were one of the first to work with what we know today as the modern DAW?
Yes...I suppose you could consider the Synclavier II the predecessor of the modern DAW. It clearly became a very sophisticated workstation later in its development. I always envisioned being able to do everything in the computer. Not only was control a factor, but also convenience and integration of all aspects of creating and recording music. One person could do it all.
Working for others so much, was the SII portable? Did you have to take it into the studio to transfer the audio or did they have their own SII?
The original Synclavier II was portable. I took mine to various recording studios. Later, as the system became more powerful, it also became much larger. I tended to find a home in a room at various recording studios to make things easier. When I was doing Michael Jackson's "Bad" album, I moved my large Synclavier system to Michael's studio at his house, and Michael's Synclavier was moved to Westlake studios, where we recorded "Bad". Michael and I would work at his house writing and recording song demos, and then we transferred those to the Synclavier at Westlake when we were working on the album. That method worked very well. We worked this way every day for a year.
With so much power under one's control, did you find yourself creating music that you could not before? Was experimentation something that you found yourself doing more or less?
Yes...my main motivation for having a Synclavier in the first place was to create music that I could not make with traditional means. That translated directly into Michael's projects. His only instruction to me was: "Make me unusual sounds." So there I was, just creating sounds and music that came naturally to me...AND I was getting paid for it!
That's a big jump from TV soundtracks to working with arguably the biggest recording artist of the day. Were album producers actively looking for new sounds to use on records, and what were some of the production ideas that were requested most?
The various other work that I ended up doing was rather interesting. Almost all the clients wanted me to do my own thing for their projects. I guess they thought my concepts and ability to create new sounds were a selling point for their products. In rare cases, where someone wanted me to just copy already existing styles, I told them I did not do that sort of thing. There are lots of musicians that can do those types of projects.
During and after working with Michael, lots of people wanted me to create his kind of sound for their projects. Since that "sound" from the "Bad" album was largely my natural style of music at that time, I was able to do other projects using that (my) style. For me, doing those projects was not a compromise at all.
What a great opportunity, to be you, for others. How were these new sounds auditioned and was there a formal process that followed for incorporating these sounds into the album tracks?
There was no rule on how sounds were auditioned. With Michael, I would make these "unusual sounds," but for them to make much sense in a pop format, I always put the sounds together in some kind of context...a groove or breakdown, etc. I would put it on cassette (this was a long time ago!) and slide the tape under his bedroom door before I left his studio around midnight or later. I usually got a call around 2:00 am from him giving me comments or instructions.
With others, I would just incorporate my sounds into the project and present it as a demo for approval...or I would just do the project without having to go through any approval process.
Were there times when a new sound idea would inspire a new track or would you do your thing on works in progress?
As I said, there is no rule. I sometimes start a music idea with a sound...or a particular rhythm pattern. Sometimes it was a melody. Sometimes even good lyrical content or poetry would create a mental image or scene in my mind. I would then write music to that scene. There are many different starting points for me when creating.
Sounds like a truly creative exchange, no wonder those albums got so much attention. Were there other applications for the SII, like fixing production mistakes, transposing parts, editing and other things that are common today?
There were so many things we could do with the Synclavier besides just using it for sound creation. For example...
When I was working on the "Bad" album, the Synclavier was at first being used as a "band-aid". This was mainly because Quincy Jones and Bruce Swedien did not really know how to use the Synclavier optimally...such as paying attention to sync problems and using the Synclavier as the master clock for everything. I sometimes ended up putting other people's drum machine sounds and sequences into the Synclavier to fix arrangements.
We used Quincy's MIDIed Hammond B3 that Jimmy Smith played on the album and captured the entire performance into the Synclavier so we could easily change sounds or edit later.
With vocals, Michael would just sing one harmony section and I sometimes just copied and pasted the parts for repetition or dropped them into other sections of the songs. In those days, doing that was very time consuming but with the Synclavier, it just took a few minutes.
I could take a live drum performance and, using the Synclavier, create an unquantized click track that followed the drums exactly. I would tell the Synclavier to follow that click. I could then quantize new performances played into the Synclavier to that new "breathing" click track.
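The "breathing" click idea can be sketched in modern terms: take the beat times from the live drum take, build a grid whose spacing stretches with the drummer, and snap new events to it. This is a hypothetical reconstruction of the concept, not Synclavier code; the function names are invented for illustration:

```python
def breathing_grid(beat_times, subdivisions=4):
    """Expand beat onset times (in seconds, e.g. taken from a live
    drum performance) into a finer quantize grid whose spacing
    stretches and shrinks with the drummer's timing."""
    grid = []
    for a, b in zip(beat_times, beat_times[1:]):
        step = (b - a) / subdivisions
        grid.extend(a + k * step for k in range(subdivisions))
    grid.append(beat_times[-1])
    return grid

def quantize(note_times, grid):
    """Snap each newly played event to the nearest grid point."""
    return [min(grid, key=lambda g: abs(g - t)) for t in note_times]

# Beats that drift slightly, as a live drummer's naturally would:
beats = [0.0, 0.52, 1.01, 1.55]
grid = breathing_grid(beats)
print(quantize([0.14, 0.77], grid))
```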
That kind of production tool must have been essential when his albums were a host to so many different artists. Besides Jimmy Smith, were there other keyboard / synthesizer players who you introduced to the SII?
For the most part, the Synclavier was over the heads of most musicians. By this I mean the complexity, knowledge and understanding involved in its use. Price-wise, it was also over the heads of many musicians! Still, almost every musician I met was interested in the Synclavier...at least to some degree.
Must have been like an alien encounter for many of them. What were the kinds of things other artists liked most about it?
I did have detailed discussions with Michael Boddicker about the Synclavier. He was interested in it to make money I think! Rory Kaplan, one of the other keyboardists with Michael bought one after seeing what I could do with it. He was also interested in making money with it. For me, my interest was always and primarily for creating new music.
Working with Michael Jackson, you used the SII in the studio and on the road as well. How was it used in a live situation that was different from the studio?
I only took the Synclavier on the road with Michael on the "Bad" tour. It was used to "enhance" the existing band instruments. Michael wanted the music in the live show to sound exactly like the "Bad" album, so I used the Synclavier to "re-create" the album live. This included sequencing parts with a click sent to Ricky Lawson, our drummer. I played some parts on the Synclavier keyboard as well as using the SynthAxe guitar controller.
Wow, so you brought the whole system to re-create things on stage. Computers are fragile, was the SII road ready?
The Synclavier was built into a flight case, so it was fairly well protected. Mitch Marcoulier, my Synclavier tech, did some modifications so the system would be more durable on the road. For example, he put filters on the cooling fans' air intakes to keep the dirt out.
How was the system transported on the road and was there a back-up system in case something failed?
I played two systems on stage, and of course we had an entire Synclavier as a backup. Also, all the hard disks were duplicated as well. In case a hard drive went down, Mitch could just plug another one in very quickly. Amazingly enough, we never had any incident with the Synclaviers that caused a problem for the show!
TWO systems, wow! I guess you didn't have a choice. The SII wasn't something you could get parts for at the local music store. What did you do when you needed replacement parts, if any?
We carried our own backup parts in case something broke or wore out. We could order something from the factory, but that would take time to be sent to us...plus we were traveling and on the move all the time. It was a bit difficult to send parts to us.
Sounds like you were fully prepared, tech wise. How long did you tour with Michael and were there any discoveries along the way with the SII?
I was on the "Bad" tour for around 18 months I think. I was in the studio for a year working on the Bad album and I was working at his house for about 6 to 8 months working on the songs for Bad.
A year and a half is a long time. Were there any situations you didn't anticipate using the SII for, or were there any problems that cropped up that it became a solution for?
The only real discovery about the Synclavier while touring was that if the Pyro explosions on stage were too big, the electromagnetic pulse from the explosions would crash one of the Synclaviers. So the pyro guys had to be careful on how big they made the explosions.
That's impressive. Most commercial gear wouldn't hold up that long. How were you set up on stage - did you have the full rig running?
I assume you mean how my equipment was set up on the Michael Jackson gig. I was on stage right, on a riser, so I could see over the singers' and dancers' heads. I had to be able to see Michael because he would give me signals when he wanted to do certain things. I had two Synclaviers and a Direct to Disk system on stage. I used one Synclavier for my live performance sounds and a second one for sequencing. The second one was also attached to the Direct to Disk system. The systems were always running...doing something...or many things...in each song.
What were you using that weird looking guitar thing for?
The "weird looking guitar thing" you mentioned was a MIDI controller called the SynthAxe. Since I was mainly a guitar player, I used that instrument to play most of my parts. It allowed me to transfer my guitar playing technique to the Synclaviers. I did play keyboards a little as well.
Speaking of new controllers, by 1988 there was a lot of new synth and sampling gear coming out. During or after the "Bad" tour, had you made upgrades to the SII?
After the "Bad" tour, the only upgrades to the already full-blown Synclaviers were occasional software updates. New England Digital did not really care about what other companies were doing. They had the most advanced music creation system in the world...other companies could not compete on the level of the Synclavier at that time. I only felt the need to look elsewhere a few years later, when New England Digital went out of business.
I do remember them marketing it as the most advanced system. Yet things didn't last, what happened to NED near the end and how did you react?
I think New England Digital went out of business in 1993. It got bought out from the bank by various entities trying to keep the company going. Ultimately, these attempts failed and the company was no more. In the meantime, fortunately, I saw the end coming and started to look for a replacement which was not easy. I kept my Synclavier up to about eight years ago when I sold it prior to moving to Japan. Meanwhile, I was building a system to replace the Synclavier.
Before they went under in 1993, there was a third-generation Synclavier Digital Music System. Was that much different from the II, functionally?
The Synclavier Digital Music System evolved into the Synclavier Digital Audio System with the addition of the Direct to Disk recording capability. Of course there were many advancements in all areas of the Synclavier...too many to mention here.
Are all of the II and final Synclavier systems captured in the Arturia Synclavier V?
The Arturia Synclavier V is an exact duplicate (same code) of the original Synclavier II. The Arturia Synclavier V is just the digital synthesizer minus the sequencer. It has been improved with added features, but the sound is exactly the same. Having said that, the sound will vary depending on the D-to-A converters used.
After an 18-month tour that went so well with the Synclavier II, what came next? Were you still working for Michael?
After the tour was finished, I went back to work for a while with Michael. We started work on the Dangerous album. It was during this work that I decided to leave. I worked nearly four years with Michael and I decided I really needed to move on and explore other musical realms.
At what point did you move on from the Synclavier system, and do you recall what it was that replaced it?
Ultimately, I went with a Mac computer, external hardware and plugins as the way to proceed for my personal music directions.
I am currently running a 2013 Mac Pro (the trash-can-looking computer) which has everything Apple made in it: twelve processors, 64 gigs of RAM, one terabyte of PCIe-based flash storage and two AMD FirePro D700 graphics processors. Attached to this I have an LG 38" curved 4K display with about 26 terabytes of external hard disk storage.
I am running about 600-plus VST and AU plugins on this system. I am using a Metric Halo ULN-8 audio interface to the system. I can create audio mixes up to 192 kHz. I have a Native Instruments Komplete Kontrol S88 keyboard and Maschine Jam MIDI controllers. I use a CME XKey Air wireless MIDI keyboard for auditioning sounds. I am using an iConnect MIDI 4Plus MIDI interface. Along with this setup, I also have a lot of guitar gear as well.
My studio monitor system consists of a KRK surround speaker system and Mackie HR824 stereo speakers.
My current work involves processing everything into high definition 3-D so I am using a very high end headphone monitoring system consisting of Stax SR-009 Ear Speakers, a Head Amp Blue Hawaii Special Edition amplifier and an Antelope Zodiac DAC.
What a shame about NED, but I had no idea you kept working with the SII up until just 8 years ago. You used the Mac with the SII via MIDI, I guess?
Actually, the Mac computer was connected via SCSI. The Mac, until recently, was used only as a terminal emulator. The entire code was eventually rewritten to actually run on a Mac. Of course, to interface with other musical gear, MIDI was used too.
By the way, the Synclavier II was radically upgraded well before I started working with Michael. This upgrade included the bigger, velocity- and pressure-sensitive keyboard. The system was no longer called the Synclavier II. It was called the Synclavier Digital Music System.
What kept you going with the SII - that you didn't find in other commercial gear?
There were two very good reasons I kept using the Synclavier for so long: the sound was amazing, and the user interface was very intuitive and very quick for making music. Of course, I knew the system very well, so that was another reason I kept using it.
Going back to your work after Michael Jackson. Four years on, what did you move on to after working for him?
Soon after leaving, I began working with multi-media and virtual reality technologies. I even wrote a three volume book on the future of VR. I started working a lot in Japan. I was introduced to many companies doing VR related work.
It was during this period that I got heavily involved in three-dimensional audio. I built a computer system that consisted of 4 channels of 3D audio processing. It was based on interactive head-related transfer functions (HRTF). I used four Quantec QRS-XLR room simulators for creating the virtual room spaces. I spent about half a million dollars on this development. I called the system the Virtual Audio Processing System (VAPS). VAPS itself cost three hundred thousand dollars!
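As background for readers unfamiliar with HRTF processing: the core operation is convolving a mono source with a measured left-ear and right-ear impulse response, and the interaural time and level differences encoded in those responses are what the brain decodes as direction. A minimal sketch in Python, using toy stand-in impulse responses (not measured HRIRs, and nothing specific to the VAPS system described above):

```python
# Toy binaural rendering in the HRTF style: convolve a mono signal with
# a left-ear and a right-ear impulse response. Real HRIRs are measured
# per direction; these stand-ins just encode an interaural delay and
# level difference for a source on the listener's right.
def convolve(signal, ir):
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

mono = [1.0, 0.5, 0.25]       # arbitrary mono source
hrir_right = [1.0]            # near ear: full level, no delay
hrir_left = [0.0, 0.0, 0.6]   # far ear: two-sample delay, attenuated

right = convolve(mono, hrir_right)  # [1.0, 0.5, 0.25]
left = convolve(mono, hrir_left)    # delayed, attenuated copy
print(right, left)
```

The same convolution, done with long measured impulse responses that change as the listener's head moves, is the basis of interactive HRTF systems.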
This was an interesting period because I was working with many companies in Japan doing projects that involved virtual acoustic environments. Among them were Nintendo, Sony, Sega, NHK and Dentsu. I was working a lot with theme parks, TV and video games.
It was during this time that I discovered the reason sound actually affects the human being.
Everyone knows that sound can change people's emotions, make their bodies move, create mental image pictures, etc., but no one knows why. I discovered the actual reason. I called this phenomenon the "Currell Effect". It is the short form of the actual name, which is "Harmonic Resonance of the Quantum Potential". It is also the name of the book I wrote on the subject.
For the past five years, I have been giving Currell Effect sound sessions in Japan. These sessions are designed, by harmonic resonance, to connect an individual to the Quantum Potential. In other words, it can exteriorize an individual and allow him/her to travel anywhere in the universe.
It is amazing that you tapped into VR so long ago. What year was that, and what kinds of applications were you using it for, then?
Concerning VR...I got involved with VR mainly because of my interest in virtual acoustic environments. I have always been a visual guy too so it was a very natural and quick progression for me. I always thought that, if used correctly, VR could change the way things are done on this planet. One of the main areas was education. I still think that advanced VR can accelerate the learning process greatly.
I started a project in Japan with a think tank called Technova. They were owned by the Toyota company. My idea was to put together the best technical and creative people to create an advanced VR system that could be used for telepresence. The goal was to make the virtual reality experience indistinguishable from reality.
That kind of VR experience would be great for education, being closer to the real thing. Speaking of which, I am fascinated with how sound works in nature - literally 3D, all around us - NOT just from the left and right. Unfortunately, it appears entertainment value is what sells, and that is why it is only a part of gaming right now. What do you think is needed to move 3D sound forward?
To move this 3D sound technology forward, there are many things that need to be done. One is the flexibility of the tools required to process sound into 3D. These tools need to be improved from the user interface point of view, and they need to take into consideration the speed of the workflow required in commercial applications. There also need to be some standards in the use of HRTF measurements so there is more uniform compatibility, both in the processing stages and in the listening environments. For example, some HRTFs will not work well in mono and some do. The technologies to create a very realistic immersive audio experience have come a long way and are now just becoming acceptable in the quality and accuracy department.
But one of the main barriers in the industry...not so much for consumers...is getting rid of old and fixed ideas about binaural and spatial audio in general. In the past, the technologies were marketed as being amazing when they were not, so the industry has been led down paths that just ended in dead ends. Industry engineers became very jaded and skeptical. That is slowly changing, but not fast enough.
Even with the compatibility issues you describe, for better and worse, games and even movies are featuring more 3D audio. How does today's 3D audio compare with the VR work you were doing back in the 90s?
The advancements in 3D sound have not been as quick as I thought. The work I was doing 25 years ago was quite advanced, even for today. The controlling software and computer power are what have advanced the most. But I still consider it not to be what it should be. Even 25 years ago, I was interested in designing the software to be very similar to a 3D graphics program, where you can design the rooms in 3D, complete with acoustic texture mapping on the furniture and objects in the room.
I was also doing advanced 3D acoustic research having to do with blind people. My idea was to use ultrasound bouncing off objects in the environment. Before these high-frequency waves return to the ear, the ultrasound is converted into directionalized acoustic waves...very similar to how bats and dolphins locate themselves. These directionalized acoustic waves are then processed using a kind of particle generator to turn them into various sound shades and shapes which resemble a kind of reverb ambience...also in 3D. The blind person, when hearing these directionalized colors and shapes of audio, would then be able to navigate his body in his environment easily.
This type of system would be very accurate and, for example, enable a blind person to distinguish the location of the silverware at the dinner table. It would be so accurate that the blind person would be able to distinguish not only the silverware's location but also be able to distinguish the difference between a knife and a fork.
Dolphins' ultrasound mechanism is so accurate, they can tell the difference between a dime and a nickel at the bottom of a pool. We could use a similar system using 3D sound for blind people too! Still today, such advanced use of 3D audio has not been realized.
But all my research in 3D sound was just preliminary background research, which has led me to what I am doing today. I am using 3D sound mixed with other advanced techniques to create a sound session that can exteriorize the individual from his body and enable him to travel anywhere in the universe! It is called the "Harmonic Resonance of the Quantum Potential". In easier terms...the "Currell Effect".
The "Currell Effect" really appeals to me as a sound designer. Yet, may be abstract to others who don't think of sound in terms of resolution, like we do video. How could this aural seeing, if that is the right description, be applied to an audio listening experience?
Concerning the visual aspect of audio listening...I think for most people, when they hear music, they get some kind of visual impression. For me, this phenomenon is very pronounced, and my training has developed it so I can use it for creative purposes when composing. I get visual images from language, music and any sound in general. When it comes to 3D sound, there is literally another dimension that is recorded. This extra dimension stimulates the brain in a radically different way than normal 2-dimensional recorded sound. Of course, sound we hear in nature is 3D, and that is why we have a much stronger emotional response to audio stimuli in the environment than we do to a recording. For example, the emotional impact of a live performance is much stronger than the same recorded version.
AMAZING! Another dimension based on 3D sound, is a concept I never considered before. Can you explain in simple terms how it works - is it with music or other aural stimulation?
Concerning the Currell Effect....what is it? It is the use of specially processed audio to create a specific phenomenon of exteriorizing the spiritual being from the body.
This is done primarily by the use of very simple and well-known wave physics called constructive and destructive wave interference. CE uses constructive wave interference. Basically, constructive wave interference is when two identical wave patterns are brought together and their energy output doubles.
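The constructive case is easy to verify numerically: when two identical, in-phase waveforms are summed sample by sample, the combined wave is reinforced at every point. A minimal sketch (a generic sine wave, nothing specific to the CE processing described here):

```python
import math

# Two identical, in-phase sine waves summed sample-by-sample:
# constructive interference reinforces the wave everywhere,
# doubling its amplitude at every sample.
N = 100
wave = [math.sin(2 * math.pi * 5 * n / N) for n in range(N)]
combined = [a + b for a, b in zip(wave, wave)]

peak_single = max(abs(s) for s in wave)
peak_combined = max(abs(s) for s in combined)
print(peak_combined / peak_single)  # 2.0
```

Shifting one copy by half a cycle before summing gives the destructive case instead: the samples cancel toward zero.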
Everything is a vibration. All the thoughts and visual impressions in the mind are wave vibrations (electrical). Many of these thoughts are too weak to be perceived by consciousness, or only bits and pieces are revealed and not the entirety of the wave pattern itself.
By using specialized audio noise, which consists of white, pink, brown and grey noise, all processed in three dimensions, we create a potential that the mind can interact with. The noise, being a collection of all frequencies, has the effect of what is called Stochastic Resonance. Stochastic Resonance is a phenomenon where a signal that is normally too weak to be detected by a sensor can be boosted by adding white noise to the signal.
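Stochastic resonance is a genuine, well-documented signal-detection phenomenon, and the basic effect can be demonstrated in a few lines: a sine wave too weak to cross a detector's threshold is never detected on its own, but once noise is added the detector starts firing, mostly near the sine's peaks. A minimal sketch (the threshold, amplitudes and noise level are illustrative choices, not values from any CE session):

```python
import math
import random

# Stochastic resonance sketch: a sub-threshold sine becomes detectable
# once noise is added, because the noise pushes the combined signal
# over threshold most often near the sine's peaks.
random.seed(1)
N = 10000
threshold = 1.0
signal = [0.5 * math.sin(2 * math.pi * 10 * n / N) for n in range(N)]

# Without noise the detector never fires: the sine never exceeds 0.5.
dry_hits = sum(1 for s in signal if s > threshold)

# With moderate Gaussian noise added, it fires many times.
noisy = [s + random.gauss(0, 0.4) for s in signal]
noisy_hits = sum(1 for s in noisy if s > threshold)

print(dry_hits, noisy_hits)
```

With too little noise the detector stays silent; with far too much, the firings no longer track the signal, which is why stochastic resonance is described as having an optimal noise level.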
But CE goes beyond Stochastic Resonance: by further enhancing the stochastic noise field, we actually create a higher order of resonance called Chaotic Resonance. Chaos theory basically states that there is hidden patterning in the noise, but it is too complicated for normal perception to perceive. I do not add any hidden patterning. The noise essentially represents all possible patterns that can be generated by the mind, so the hidden patterning is actually created by the listener. Simply stated, this means that any pattern that is in the mind can find its duplicate in the chaos.
What is interesting is that a pattern in the mind may be too weak to be perceived, while the same pattern in the chaos is too complicated to be perceived. But when both patterns resonate together, we get constructive wave interference, which generates more energy. The added boost in energy of a specific pattern then renders the pattern visible to consciousness.
But then something amazing happens, the mind acts as a converter and actually modulates this boosted pattern into the electromagnetic spectrum via harmonics and octaves of the wave pattern. Now, the pattern is basically light which then interacts with another resonant field. This interaction is called the Harmonic Resonance of the Quantum Potential.
The Quantum Potential is a special attribute of the active vacuum or hyperspace. The quantum potential can connect widely separated systems by instantaneous effects, as if the systems were not separated but were located together as a single system. Further, the quantum potential does not have a single localized source.
This means that consciousness can be anywhere in the universe instantly. This is how a person can travel outside his body to anywhere in the universe...to any time, or any dimension.
As you can see, this is advanced technology. I have written a book on the subject called "Currell Effect: Harmonic Resonance of the Quantum Potential". Unfortunately, it is currently available only in Japanese, but the English version will become available soon.
In the meantime, for those that are interested, here is a link to an article in Headphone Guru on the basics of the Currell Effect in English:
THE EVENT HORIZON – “MANIFESTING CREATIVITY” – PART 8
I have a dozen questions about Chaotic Resonance, and more of what you just described, Chris. But we'll have to save that for another interview. In the meantime, I will be checking out that article and grabbing the book to learn more.
I want to thank Christopher Currell for taking the time to share his experience with the Synclavier II music system and his on-going work with the many applications of sound. You are encouraged to investigate his work further via the following links:
Synclavier, Music and Michael Jackson
Virtual Audio 3-D Audio
Synclavier Software Development