The Sonification Handbook
Edited by Thomas Hermann, Andy Hunt, John G. Neuhoff
Logos Publishing House, Berlin, Germany
ISBN 978-3-8325-2819-5
2011, 586 pages
Online: http://sonification.de/handbook
Order: http://www.logos-verlag.com
Reference:
Hermann, T., Hunt, A., Neuhoff, J. G., editors (2011). The Sonification Handbook. Logos Publishing House, Berlin, Germany.
Chapter 10
Laboratory Methods for Experimental Sonification
Till Bovermann, Julian Rohrhuber and Alberto de Campo
This chapter elaborates on sonification as an experimental method by arguing that sonification methods need to incrementally merge into the specific cultures of research, including learning, drafting, handling of complexity, and communicating within and between multiple communities. The place where such a convergence may be found may be called a sonification laboratory.
Reference:
Bovermann, T., Rohrhuber, J., and de Campo, A. (2011). Laboratory methods for experimental sonification. In Hermann, T., Hunt, A., Neuhoff, J. G., editors, The Sonification Handbook, chapter 10, pages 237–272. Logos Publishing House, Berlin, Germany.
Media examples: http://sonification.de/handbook/chapters/chapter10
Chapter 10
Laboratory Methods for Experimental Sonification
Till Bovermann, Julian Rohrhuber and Alberto de Campo
This chapter elaborates on sonification as an experimental method. It is based on the premise that there is no such thing as unconditional insight, no isolated discovery or invention; all research depends on methods. The understanding of their correct functioning depends on the context. Sonification, as a relatively new ensemble of methods, therefore requires the re-thinking and re-learning of commonly embraced understandings; a process that requires much experimentation.
Whoever has tried to understand something through sound knows that it opens up a maze full of both happy and unhappy surprises. For navigating this labyrinth, it is not sufficient to ask for the most effective tools to process data and output appropriate sounds through loudspeakers. Rather, sonification methods need to incrementally merge into the specific cultures of research, including learning, drafting, handling of complexity, and last but not least the communication within and between multiple communities. Sonification can be a great complement for creating multimodal approaches to interactive representation of data, models and processes, especially in contexts where phenomena are at stake that unfold in time, and where observation of parallel streams of events is desirable. The place where such a convergence may be found may be called a sonification laboratory, and this chapter discusses some aspects of its workings.
To begin with, what are the general requirements of such a working environment? A sonification laboratory must be flexible enough to allow for the development of new experimental methods for understanding phenomena through sound. It also must be a point of convergence between different methods, mindsets, and problem domains. Fortunately, today the core of such a laboratory is a computer, and in most cases its ‘experimental equipment’ is not hardware to be delivered by heavy duty vehicles, but is software which can be downloaded from online resources. This is convenient and flexible, but also a burden. It means that the division of labor between the development of tools, experiments, and theory cannot be taken for granted, and a given sonification toolset cannot be ‘applied’ without further knowledge; within research, there is no such thing as ‘applied sonification’, as opposed to ‘theoretical sonification’. Participants in sonification projects need to acquire some familiarity with both the relevant discipline and the methods of auditory display. Only once a suitable solution is found and has settled into regular usage do these complications disappear into the background, like the medical display of a patient’s healthy pulse. Before this moment, both method and knowledge depend on each other like the proverbial chicken and egg. Because programming is an essential, but also sometimes intractable, part of developing sonifications, this chapter is dedicated to the software development aspect of sonification laboratory work. It begins with an indication of some common pitfalls and misconceptions. A number of sonification toolkits are discussed, together with music programming environments which can be useful for sonification research. The basics of programming are introduced with one such programming language, SuperCollider. Some basic sonification design issues are discussed in more detail, namely the relationship between time, order and sequence, and that between mapping and perception. Finally, four more complex cases of sonification designs are shown – vector spaces, trees, graphs, and algorithms – which may be helpful in the development process.

In order to allow both demonstration and discussion of complex and interesting cases, rather than comparing trivial examples between platforms, the examples are provided in a single computer language. In text-based languages, the program code also serves as precise readable documentation of the algorithms and the intentions behind them [17]. The examples given can therefore be implemented in other languages.
10.1 Programming as an interface between theory and laboratory practice
There is general agreement in the sonification community that the development of sonification methods requires the crossing of disciplinary boundaries. Just as the appropriate interpretation of visualized data requires training and theoretical background about the research questions under consideration, so does the interpretation of an auditory display. There are very few cases where sonification can just be applied as a standard tool without adaptation and understanding of its inner workings.
More knowledge, however, is required for productive work. This knowledge forms an intermediate stage, combining know-how and know-why. As laboratory studies have shown, the calibration and development of new means of display take up by far the most work in scientific research [24]. Both for arts and sciences, the conceptual re-thinking of methods and procedures is a constant activity. A computer language geared towards sound synthesis is a perfect medium for this kind of experimentation, as it can span the full scope of the development, from first experiments to deeper investigations. It allows us to understand the non-trivial translations between data, theory, and perception, and permits a wide epistemic context (such as psychoacoustics, signal processing, and aesthetics) to be taken into account. Moreover, programming languages hold such knowledge in an operative form.
As algorithms are designed to specify processes, they dwell at the intersection between laboratory equipment and theory, as boundary objects that allow experimentation with different representation strategies. Some of what needs to be known in order to actively engage in the development and application of sonification methods is discussed in the subsequent sections in the form of generalized case studies.
10.1.1 Pitfalls and misconceptions
For clarification, this section discusses some common pitfalls and misconceptions that tend to surface in a sonification laboratory environment. Each section title describes a misunderstanding, which is then disentangled in the section which follows:
»Data is an immediate given.« Today, measured and digitized data appears as one of the rocks upon which science is built, both for its abundance and its apparent solidity. A working scientist will however tend to emphasize the challenge of finding appropriate data material, and will, wherever required, doubt its relevance. In sonification, one of the clearest indications of the tentative character of data is the amount of working hours that goes into reading the file formats in which the data is encoded, and finding appropriate representations for them, i.e., data structures that make the data accessible in meaningful ways. In order to do this, a working understanding of the domain is indispensable.
»Sonification can only be applied to data.« Often sonification is treated as if it were a method applied to data only. However, sonification is just as much relevant for the understanding of processes and their changing inner state, models of such processes, and algorithms in general. Sonification may help to perceptualize changes of states as well as unknowns and background assumptions. Using the terminology of the German historian of science Rheinberger [24], we can say that it is the distinction between technical things (those effects and facts which we know about and which form the methodological background of the investigation) and epistemic things (those things which are the partly unknown objects of investigation) that makes up the essence of any research. In the course of experimentation, as we clarify the initially fuzzy understanding of what the object of interest exactly is, the notion of what does or does not belong to the object to be sonified can change dramatically. To merely "apply sonification to data" without taking into account what it represents would mean to assume this process to be completed already. Thus, many other sources than the common static numerical data can be interesting objects for sonification research.
»Sonification provides intuitive and direct access.« To understand something not yet known requires bringing the right aspects to attention: theoretical or formal reasoning, experimental work, informal conversation, and methods of display, such as diagrams, photographic traces, or sonification. It is very common to assume that acoustic or visual displays provide us somehow with more immediate or intuitive access to the object of research. This is a common pitfall: every sonification (just like an image) may be read in very different ways, requires acquaintance with both the represented domain and its representation conventions, and implies theoretical assumptions in all fields involved (i.e., the research domain, acoustics, sonification, interaction design, and computer science). This pitfall can be avoided by not taking acoustic insight for granted. The sonification laboratory needs to allow us to gradually learn to listen for specific aspects of the sound and to judge them in relation to their origin together with the sonification method. In such a process, intuition changes, and understanding of the data under exploration is gained indirectly.
»Data "time" and sonification time are the same.« Deciding which sound events of a sonification happen close together in time is the most fundamental design decision: temporal proximity is the strongest cue for perceptual grouping (see section 10.4.1). By sticking to a seemingly compelling order (data time must be mapped to sonification time), one loses the heuristic flexibility of really experimenting with orderings which may seem more far-fetched, but may actually reveal unexpected phenomena. It can be helpful to make the difference between sonification time and domain time explicit; one way to do this formally is to use a sonification variable ˚t as opposed to t. For a discussion of sonification variables, see section 10.4.5.
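The decoupling of domain time from sonification time can be made concrete in code. The following Python sketch is an illustration of this chapter's argument, not code from it, and all names in it are hypothetical: the same records are laid out on a separate sonification time axis, ordered either by their domain time or by any other attribute.

```python
# Hypothetical illustration: scheduling sound events so that sonification
# time is decoupled from data ("domain") time. Each record carries its own
# domain time and a value of interest.
records = [
    {"domain_time": 0.0, "value": 7.0},
    {"domain_time": 1.0, "value": 2.0},
    {"domain_time": 2.0, "value": 5.0},
]

def schedule(records, key, spacing=0.25):
    """Assign each record an onset on the sonification time axis,
    ordering events by `key` instead of necessarily by domain time."""
    ordered = sorted(records, key=lambda r: r[key])
    return [(i * spacing, r) for i, r in enumerate(ordered)]

# Ordering by value instead of by domain time places similar values
# close together in sonification time, producing a different
# perceptual grouping of the very same data.
by_domain_time = schedule(records, key="domain_time")
by_value = schedule(records, key="value")
```

Here the onset spacing plays the role of the sonification variable, and swapping the ordering key is one of the "far-fetched" experiments the text recommends keeping available.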
»Sound design is secondary, mappings are arbitrary.« For details to emerge in sonifications, perceptual salience of the acoustic phenomena of interest is essential and depends critically on psychoacoustically well-informed design. Furthermore, perception is sensitive to domain-specific meanings, so finding convincing metaphors can substantially increase accessibility. Stephen Barrass’ earbenders [2] provide many interesting examples. Finally, "aesthetic intentions" can be a source of problems. If one assumes that listeners will prefer hearing traditional musical instruments over more abstract sounds, then pitch differences will likely sound "wrong" rather than interesting. If one then designs the sonifications to be more "music-like" (e.g., by quantizing pitches to the tempered scale and rhythms to a regular grid), one loses essential details, introduces potentially misleading artefacts, and will likely still not end up with something that is worthwhile music. It seems more advisable here to create opportunities for practicing more open-minded listening, which may be both epistemically and aesthetically rewarding once one begins to read the sonification’s details fluently.
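The loss of detail caused by "music-like" quantization can be demonstrated in a few lines. This Python sketch (a hypothetical illustration, not code from the chapter) maps three distinct data values to continuous pitches and then snaps them to the tempered scale:

```python
# Hypothetical sketch: how quantizing pitches to the tempered scale can
# erase detail in a parameter mapping.

def map_to_pitch(value, lo, hi, pitch_lo=48.0, pitch_hi=72.0):
    """Linear mapping of a data value to a continuous MIDI pitch."""
    frac = (value - lo) / (hi - lo)
    return pitch_lo + frac * (pitch_hi - pitch_lo)

def quantize(pitch):
    """Snap a continuous pitch to the nearest tempered-scale semitone."""
    return round(pitch)

data = [0.50, 0.51, 0.52]  # three clearly distinct measurements
continuous = [map_to_pitch(v, 0.0, 1.0) for v in data]
quantized = [quantize(p) for p in continuous]
# The continuous mapping preserves three distinct pitches; after
# quantization all three collapse onto the same semitone.
```

The three measurements remain distinguishable as continuous pitches but become indistinguishable after quantization, which is exactly the kind of misleading artefact described above.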
10.2 Overview of languages and systems
The history of sonification is also a history of laboratory practice. In fact, within the research community, a number of sonification systems have been implemented and described since the 1980s. They all differ in scope of features and limitations, as they were designed as laboratory equipment, intended for different specialized contexts. These software systems should be taken as an integral part of the amalgam of experimental and thought processes, as "reified theories" (a term coined by Bachelard [1]), or rather as a complex mix between observables, documents, practices, and conventions [14, p. 18]. Some systems are now historic, meaning they run on operating systems that are now obsolete, while others are in current use, and thus alive and well; most of them are toolkits meant for integration into other (usually visualization) applications. Few are really open and easily extensible; some are specialized for very particular types of datasets.
The following sections look at dedicated toolkits for sonification, then focus on mature sound and music programming environments, as they have turned out to be very useful platforms for fluid experimentation with sonification design alternatives.
10.2.1 Dedicated toolkits for sonification
xSonify has been developed at NASA [7]; it is based on Java, and runs as a web service1. It aims at making space physics data more easily accessible to visually impaired people. Considering that it requires data to be in a special format, and that it only features rather simplistic sonification approaches (here called ‘modi’), it will likely only be used to play back NASA-prepared data and sonification designs.
The Sonification Sandbox [31] has intentionally limited range, but it covers that range well: being written in Java, it is cross-platform; it generates MIDI output, e.g., to be fed into any General MIDI synth (such as the internal synth on many soundcards). One can import data from CSV text files, and view these with visual graphs; a mapping editor lets users choose which data dimension to map to which sound parameter: timbre (musical instruments), pitch (chromatic by default), amplitude, and (stereo) panning. One can select to hear an auditory reference grid (clicks) as context. It is very useful for learning basic concepts of parameter mapping sonification with simple data, and it may be sufficient for some auditory graph applications. Development is still continuing, as the release of version 6 (and later small updates) in 2010 shows.
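The parameter-mapping workflow just described (assign a data dimension to a sound parameter, rescale, emit one note event per row) can be sketched in a few lines. The following is a toy Python illustration of the general idea only, not Sonification Sandbox code, and every name in it is invented:

```python
# Toy sketch of parameter mapping sonification: each data dimension is
# assigned to one sound parameter, and each data row becomes one
# MIDI-like note event.

def scale(value, lo, hi, out_lo, out_hi):
    """Linearly rescale value from [lo, hi] to [out_lo, out_hi]."""
    return out_lo + (value - lo) / (hi - lo) * (out_hi - out_lo)

def map_rows(rows, mapping):
    """mapping: sound parameter -> (column index, input range, output range)."""
    events = []
    for t, row in enumerate(rows):
        event = {"onset": t * 0.5}  # one event every half second
        for param, (col, (lo, hi), (out_lo, out_hi)) in mapping.items():
            event[param] = scale(row[col], lo, hi, out_lo, out_hi)
        events.append(event)
    return events

rows = [(0.0, 10.0), (0.5, 20.0), (1.0, 30.0)]
mapping = {
    "pitch": (0, (0.0, 1.0), (60, 72)),       # column 0 -> MIDI pitch
    "velocity": (1, (10.0, 30.0), (30, 90)),  # column 1 -> loudness
}
events = map_rows(rows, mapping)
```

A mapping editor like the Sandbox's essentially lets users edit such a mapping table interactively instead of in code.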
Sandra Pauletto’s toolkit for sonification [21] is based on Pure Data and has been used for several application domains: electromyography data for physiotherapy [22], helicopter flight data, and others. While it supports some data types well, adapting it for new data is slow, mainly because Pure Data is not a general-purpose programming language, in which reader classes for data files would be easier to write.
SonifYer [27] is a standalone application for OSX, as well as a forum run by the sonification research group at Berne University of the Arts2. In development for several years now, it supports sonification of EEG, fMRI, and seismological data, all with elaborate user interfaces. As sound algorithms, it provides audification and FM-based parameter mapping; users can tweak the settings of these, apply EQ, and create recordings of the sonifications created for their data of interest.
SoniPy is a recent and quite ambitious project, written in the Python language [33]. Its initial development push in 2007 looked very promising, and it takes a very comprehensive approach to all the elements the authors consider necessary for a sonification programming environment. It is an open source project hosted at sourceforge3, and may well evolve into a powerful and interesting sonification system.
All these toolkits and applications are limited in different ways, based on the resources for development available to their creators, and the applications envisioned for them. They tend to do well what they were intended for, and allow users quick access to experimenting with existing sonification designs with little learning effort.
While learning music and sound programming environments will require more effort, especially from users with little experience in doing creative work with sound and programming, they already provide rich and efficient possibilities for sound synthesis, spatialization, real-time control, and user interaction. Such systems can become extremely versatile tools for the sonification laboratory context by adding what is necessary for access to the data and its domain. To provide some more background, an overview of the three main families of music programming environments follows.
1 http://spdf.gsfc.nasa.gov/research/sonification
2 http://sonifyer.org/
3 http://sourceforge.net/projects/sonipy
10.2.2 Music and sound programming environments
Computer music researchers have been developing a rich variety of tools and languages for creating sound and music structures and processes since the 1950s. Current music and sound programming environments offer many features that are directly useful for sonification purposes as well. Mainly, three big families of programs have evolved, and most other music programming systems are conceptually similar to one of them.
Offline synthesis: MusicN to CSound
MusicN languages originated in 1957/58 from the Music I program developed at Bell Labs by Max Mathews and others. Music IV [18] already featured many central concepts in computer music languages, such as the idea of a Unit Generator (UGen) as the building block for audio processes (unit generators can be, for example, oscillators, noises, filters, delay lines, or envelopes). As the first widely used incarnation, Music V was written in FORTRAN and was thus relatively easy to port to new computer architectures, from where it spawned a large number of descendants.
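The unit-generator idea, signals as composable building blocks that can be wired into graphs, can be illustrated with ordinary Python generators. This is a deliberately naive sketch of the concept only, not how any MusicN descendant is actually implemented, and all names are hypothetical:

```python
# Illustrative sketch of the unit-generator concept: each UGen is a
# sample stream, and UGens compose into a small synthesis graph.
import math

SR = 44100  # sample rate in Hz

def sine_osc(freq, amp=1.0):
    """Oscillator UGen: yields an endless sine signal."""
    phase_inc = 2 * math.pi * freq / SR
    n = 0
    while True:
        yield amp * math.sin(phase_inc * n)
        n += 1

def line_env(dur):
    """Envelope UGen: ramps linearly from 1.0 down to 0.0 over dur seconds."""
    total = int(dur * SR)
    for n in range(total):
        yield 1.0 - n / total

def multiply(a, b):
    """Binary UGen: multiplies two signals sample by sample."""
    for x, y in zip(a, b):
        yield x * y

# A tiny synthesis graph: an enveloped sine tone, 0.1 seconds long
# (the finite envelope terminates the otherwise endless oscillator).
samples = list(multiply(sine_osc(440.0, amp=0.5), line_env(0.1)))
```

Real systems differ greatly in efficiency and scheduling, but the compositional structure (oscillator into envelope into output) is the same from Music IV to CSound to SuperCollider.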
The main strand of successors in this family is CSound, developed at MIT Media Lab beginning in 1985 [29], which has been very popular in academic as well as dance computer music. Its main approach is to use very reduced language dialects for orchestra files (consisting of descriptions of DSP processes called instruments), and score files (descriptions of sequences of events that each call one specific instrument with specific parameters at specific times). A large number of programs were developed as compositional front-ends in order to write score files based on algorithmic procedures, such as Cecilia [23], Cmix, Common Lisp Music, and others. CSound created a complete ecosystem of surrounding software.
CSound has a very wide range of unit generators and thus synthesis possibilities, and a strong community; the CSound Book demonstrates its scope impressively [4]. However, for sonification, it has a few substantial disadvantages. Even though it is text-based, it uses specialized dialects for music, and thus is not a full-featured programming language. Any control logic and domain-specific logic would have to be built into other languages or applications, while CSound could provide a sound synthesis back-end. Being originally designed for offline rendering, and not built for high-performance real-time demands, it is not an ideal choice for real-time synthesis either. One should emphasize, however, that CSound is being maintained well and is available on very many platforms.
Graphical patching: Max/FTS to Max/MSP(/Jitter) to PD/GEM
The second big family of music software began with Miller Puckette’s work at IRCAM on Max/FTS in the mid-1980s, which later evolved into Opcode Max, which eventually became Cycling ’74’s Max/MSP/Jitter environment4. In the mid-1990s, Puckette began developing an open source program called Pure Data (Pd), later extended with a graphics system called GEM.5 All these programs share a metaphor of "patching cables", with essentially static object allocation of both DSP and control graphs. This approach was never intended to be a full programming language, but a simple facility to allow connecting multiple DSP processes written in lower-level (and thus more efficient) languages. With Max/FTS, for example, the programs actually ran on proprietary DSP cards. Thus, the usual procedure for making patches for more complex ideas often entails writing new Max or Pd objects in C. While these can run very efficiently if well written, special expertise is required, and the development process is rather slow and takes the developer out of the Pd environment, thus reducing the simplicity and transparency of development.
4 http://cycling74.com/products/maxmspjitter/
In terms of sound synthesis, Max/MSP has a much more limited palette than CSound, though a range of user-written MSP objects exists. Support for graphics with Jitter has become very powerful, and there is a recent development of the integration of Max/MSP into the digital audio environment Ableton Live. Both Max and Pd have a strong (and partially overlapping) user base; the Pd base is somewhat smaller, having started later than Max. While Max is commercial software with professional support by a company, Pd is open-source software maintained by a large user community. Max runs on Mac OSX and Windows, but not on Linux, while Pd runs on Linux, Windows, and OSX.
Real-time text-based environments: SuperCollider, ChucK
The SuperCollider language today is a full-fledged interpreted computer language which was designed for precise real-time control of sound synthesis, spatialization, and interaction on many different levels. As much of this chapter uses this language, it is discussed in detail in section 10.3.
The ChucK language has been written by Ge Wang and Perry Cook, starting in 2002. It is still under development, exploring specific notions such as being strongly-timed. Like SuperCollider, it is intended mainly as a music-specific environment. While being cross-platform, and having interfacing options similar to SC3 and Max, it currently features a considerably smaller palette of unit generator choices. One advantage of ChucK is that it allows very fine-grained control over time; both synthesis and control can have single-sample precision.
10.3 SuperCollider: Building blocks for a sonification laboratory
10.3.1 Overview of SuperCollider
The SuperCollider language and real-time rendering system results from the idea of merging both real-time synthesis and musical structure generation into a single environment, using the same language. Like Max/Pd, it can be said to be an indirect descendant of MusicN and CSound. From SuperCollider 1 (SC1), written by James McCartney in 1996 [19], it has gone through three complete rewriting cycles; thus the current version SC3 is a very mature system. In version 2 (SC2) it inherited much of its language characteristics from the Smalltalk language; in SC3 [20] the language and the synthesis engine were split into a client/server architecture, and many features from other languages such as APL and Ruby were adopted as options.
5 http://puredata.info/
As a modern and fully-fledged text-based programming language, SuperCollider is a flexible environment for many uses, including sonification. Sound synthesis is very efficient, and the range of unit generators available is quite wide. SC3 provides a GUI system with a variety of interface widgets. Its main emphasis, however, is on stable real-time synthesis. Having become open-source with version 3, it has since flourished. Today, it has quite active developer and user communities. SC3 currently runs on OSX and Linux. There is also a less complete port to Windows.
10.3.2 Program architecture
SuperCollider is divided into two processes: the language (sclang, also referred to as client) and the sound rendering engine (scsynth, also referred to as server). These two systems connect to each other via the networking protocol Open Sound Control (OSC).6
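OSC messages have a simple binary layout: a null-padded address string, a type tag string, and big-endian encoded arguments. As an illustration of this wire format (in Python rather than SuperCollider, and with hypothetical helper names; in practice one would use an existing OSC library), a minimal encoder might look like this:

```python
# Minimal sketch of the OSC wire format that client and server exchange.
import struct

def osc_string(s):
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address, *args):
    """Encode an OSC message: address, type tags, then big-endian arguments."""
    tags = ","
    payload = b""
    for a in args:
        if isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)   # 32-bit big-endian integer
        elif isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)   # 32-bit big-endian float
        else:
            tags += "s"
            payload += osc_string(a)          # padded string
    return osc_string(address) + osc_string(tags) + payload

# A message in the style of the server's /s_new command
# (synthdef name, node ID, add action, target node ID):
msg = osc_message("/s_new", "default", 1001, 0, 0)
```

Such a message, sent over UDP or TCP to scsynth's port, is how sclang (or any other client application) requests sound processes on the server.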
SuperCollider is an interpreted fully-featured programming language. While its architecture is modeled on Smalltalk, its syntax is more like C++. Key features of the language include its ability to express and realize timing very accurately, its rapid prototyping capabilities, and the algorithmic building blocks for musical and other time-based compositions.
In contrast to sclang, the server, scsynth, is a program with a fixed architecture that was designed for highly efficient real-time sound-rendering purposes. Sound processes are created by means of synthesis graphs, which are built from a dynamically loaded library of unit generators (UGens); signals can be routed on audio and control buses, and sound files and other data can be kept in buffers.
This two-fold implementation has major benefits: first, other applications can use the sound server for rendering audio; second, it scales well to multiple machines/processor cores, i.e., scsynth can run on one or more autonomous machines; and third, decoupling sclang and scsynth makes both very stable.
However, there are also some drawbacks to take into account. Firstly, there is always network latency involved, i.e., real-time control of synthesis parameters is delayed by the (sometimes solely virtual) network interface. Secondly, the network interface introduces an artificial bottleneck for information transfer, which in turn makes it hard to operate directly on a per-sample basis. Thirdly, there is no direct access to server memory from sclang. (On OSX, this is possible by using the internal server, so one can choose one’s compromises.)
SuperCollider can be extended easily by writing new classes in the SC language. There is a large collection of such extension libraries called Quarks, which can be updated and installed from within SC3.7 One can also write new Unit Generators, although a large collection of these is already available as sc3-plugins.8
6 http://opensoundcontrol.org/
7 See the Quarks help file for details
8 http://sourceforge.net/projects/SC3plugins/