
  1. Hi all, If you haven't done so already, I would recommend that you load this with another DAW (better yet, more than one additional one) to ensure that it's not the DAW. Many DAWs out there still have problems with VST3 implementations of various plugins. The fact that some VST3s work fine in a particular DAW should not be interpreted as conclusive evidence that the DAW is not at fault here.

     As an example, the latest release of Samplitude Pro X4 Suite has numerous problems with certain VST3s, including some that I have written, but also including SampleTank 4. It works fine with others, and the VST2 versions work fine. Another example is Waveform 9, which had problems with certain VST3s, including AGain, the example VST3 plugin from Steinberg. Waveform 10 appears to have fixed this one. Mixcraft Pro Studio 8.1 also had problems with AGain VST3, and I believe that the current release still does.

     Regards, Dave Clark
  2. Hi Chandler, I'm not sure if I would use this or not in mixes, but I can see where a developer may be interested in this, provided that the use I have in mind is permissible under the license. Is it permissible to export an IR that is not a global preset or profiles preset (i.e. one that has been edited or started from scratch) for inclusion in a guitar amp sim by the developer, for use by third parties? This could prove to be a great way to create IRs for a number of cabs for a developer who doesn't have access to lots of amps and cabinets and/or isn't particularly interested in recording a bunch of stuff, but would rather stick to circuit modelling.

     There are cases where the sounds of certain types of synths may be used in building instruments for third parties even without permission (synths that are not sample-based, for example), and this application appears to be similar to those, so it may not matter what the licensing terms are (i.e. they may be unenforceable), but it's simpler if everyone is clear and agrees up front.

     If the use I have in mind is already permissible, then I would seriously consider getting this. I would use it to expand the cabinets available in my own amp sims by perhaps a dozen or so, not anywhere near the unending number that MCabinet would be capable of providing. This would be used to simplify things, not complicate them. I've actually tried a few times to build IRs for cabs, but it takes a lot of filters to accomplish anything even approximately correct, so I'm really happy to see this, where all that stuff is included and I didn't have to do it. Thanks for any info.

     Regards, Dave
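The mechanism behind using an exported cabinet IR in an amp sim is plain linear convolution of the amp output with the IR. A minimal sketch, with toy stand-ins for both signals (the "IR" here is synthetic decaying noise, not anything exported from MCabinet):

```python
import numpy as np
from scipy.signal import fftconvolve

SR = 48000  # sample rate, Hz

# Toy stand-in for a cabinet impulse response: exponentially decaying
# noise. A real IR would come from an export or a speaker measurement.
rng = np.random.default_rng(0)
ir = rng.standard_normal(2048) * np.exp(-np.arange(2048) / 300.0)
ir /= np.max(np.abs(ir))

# Toy "amp output": one second of a 220 Hz sawtooth.
t = np.arange(SR) / SR
dry = 2.0 * ((220.0 * t) % 1.0) - 1.0

# Applying the cab IR is plain linear convolution.
wet = fftconvolve(dry, ir)[: len(dry)]
```

FFT-based convolution is the usual choice here because cabinet IRs run to thousands of taps, where direct convolution would be far slower.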
  3. Hi Chandler, Thanks for your reply and the reference, which I have so far only briefly examined due to time constraints. Before asking a simple question below that will probably clarify everything, at least for me, please allow me to make a few preliminary comments so that the question itself is clear.

     To do physical modelling, one creates a mathematical or numerical analogy. In the analogy there must be certain correspondences between the real object being modelled and the analogical model in order to qualify as a physical model. These include:

     1) Geometry, for example a mesh in the model.
     2) Material parameters, e.g. the speed of sound in the relevant material.
     3) Physical laws, e.g. the appropriate equations.
     4) Boundary and initial conditions, e.g. the velocity of mesh points at the position of the strike at t = 0+.

     The obvious advantage of true physical modelling, not something that is merely called "physical modelling" (*), is that one can ask the modelling person to directly adapt an existing model to other conditions, making the SAME CHANGES IN THE MODEL that one makes with real objects. Take, for example, your existing model of striking a glass with something, and modify it in a fashion analogous to what one can do with real objects. A second glass to be struck is half the height of the first glass, but has twice the value of all the diameters along its length. The glassy material of the second glass has a speed of sound that is 15% higher than that of the first. The second glass is struck 1.5 cm below the location of the strike on the first glass. Using a physical model, the modeller would be able to predict what this would sound like, even if not accurately due to incompleteness. Can you do that?

     Please note that I'm not asking whether or not you can "by hook or by crook" create an audio file that you think sounds like the situation described; I'm asking whether you can make the same analogous changes to the model of the glass being struck (geometry, material parameters, IC/BC) as you would to the real objects involved. If not, the model is not a physical model. On the other hand, if MSoundFactory can do this, then your video has completely misled me about what MSoundFactory can do. I just viewed about half of it again, and I see absolutely nothing that convinces me that it can accomplish what I am asking. In fact, at 11:00 you discuss a glass harmonica. Rather than physically model a finger rubbing a glass, you turn up the resonance of the modal filter! In a physical model, you would, for example, disturb the mesh points of the boundary of the mesh and allow the physical model to behave like the real object. If MSoundFactory can do the latter, then your demonstration is highly misleading.

     Thanks again, and I really do think that this stuff you are doing is great. I looked briefly at Modalys and am interested in checking that out further when I have time.

     Regards, Dave

     (*) Does calling a whale a fish make it true? IMHO no. If somebody calls a whale a fish, am I obligated to do so? IMHO no. Am I obligated to attempt to educate? IMHO yes.
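The kind of change being asked about can be illustrated with the simplest possible physical model, an ideal string, where the modal frequencies f_n = n·c/(2L) follow directly from geometry (L) and material (c). This is not MSoundFactory's method, and the numbers are arbitrary; it just shows how a physical model propagates an object-level change to every mode at once:

```python
# Modal frequencies of an ideal string: f_n = n * c / (2 * L).
# Geometry (L) and material (c) fully determine every mode, so a
# physical change to the object changes all modes in a derived way.

def string_modes(c, L, n_modes=5):
    return [n * c / (2.0 * L) for n in range(1, n_modes + 1)]

f_original = string_modes(c=340.0, L=0.65)             # arbitrary example values
f_modified = string_modes(c=340.0 * 1.15, L=0.65 / 2)  # 15% faster material, half length

# Every mode shifts by the same physically derived factor 1.15 * 2 = 2.3,
# with no per-mode hand editing of a spectrum.
ratios = [b / a for a, b in zip(f_original, f_modified)]
```

A struck glass needs a far richer model (shell geometry, 2-D/3-D eigenvalue problem), but the principle is the same: the modeller changes L and c, and the new modes fall out of the physics.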
  4. Hi Chandler, Sorry to have to say so, but it's neither modal synthesis nor physical modelling. In the same reference as I cited above, modal synthesis is described on page 10. The first step is to solve the eigenvalue problem for the system, considering the system of PDEs as well as "information about material properties and geometry, and associated boundary conditions." (Bilbao, page 10.) You aren't doing that. At about 4:30 in the video, you are moving the cursor about in a time-domain recording and getting very different representations in the frequency domain, then choosing one based on looks and sound. You are leaving out the first step, which involves detailed consideration of the physics of the system.

     If you were doing physical modelling, you could reshape the object you were modelling and solve the new eigenvalue problem. Your method requires you to get a new object, record the sound of it being hit by something in a way that is also not modelled at all, then fiddle around with the cursor to get some new representation of it in the frequency domain, construct a time-domain representation of it, then fiddle around with it some more. You are also not physically modelling decay; you are imposing a decay function on top of the time-domain signal, which would actually go on forever after your reconstruction of it from the frequency components if not for an imposed decay. You are also not modelling different sizes of a physical object to obtain different notes. Instead you are manipulating the spectrum and/or the reconstructed signal with mathematical operations that often turn out to be poor approximations to physical changes.

     Again, I think this stuff is great and I think people would find it useful, but also again, it's not physical modelling, nor modal synthesis as you are claiming. You may be doing some of the latter steps of modal synthesis, but you are not starting at the beginning. You left out the physics of the system. Yes, some of what you do mathematically can in part be justified by physical principles, but that doesn't make it physics.

     Regards, Dave Clark

     PS: Modal synthesis involves use of a "shape matrix," projecting modal functions onto an observation state; it does not involve temporal Fourier transforms, and it generates the output waveform directly in the time domain. I don't see any signs that you're doing any of that. What you are doing looks to me like good ol' Fourier analysis followed by some editing, then constructing time-domain signals, then manipulating those. Just regular DSP. I would be interested to know if you are actually doing any of the steps of modal synthesis as Bilbao describes it. In my own room modelling, I solve the PDEs in 3-D (the wave equation in 3-D), obtaining the eigenfrequencies and amplitudes (in a statistical approximation sum), then switch to DSP to construct IRs. Strictly speaking, this is not modal synthesis either, but I claim it is physical modelling.
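The eigenvalue step described in the PS has a well-known closed form for the rectangular-room case: with rigid walls, the 3-D wave equation gives eigenfrequencies f = (c/2)·sqrt((nx/Lx)² + (ny/Ly)² + (nz/Lz)²). A minimal sketch (room dimensions and c = 343 m/s are assumed example values, not the poster's actual model):

```python
import itertools
import math

def room_modes(Lx, Ly, Lz, c=343.0, n_max=4):
    """Eigenfrequencies of the 3-D wave equation in a rigid-walled
    rectangular room, f = (c/2) * sqrt((nx/Lx)^2 + (ny/Ly)^2 + (nz/Lz)^2),
    returned sorted ascending together with their mode indices."""
    modes = []
    for nx, ny, nz in itertools.product(range(n_max + 1), repeat=3):
        if (nx, ny, nz) == (0, 0, 0):
            continue  # skip the trivial constant "mode"
        f = (c / 2.0) * math.sqrt((nx / Lx) ** 2 + (ny / Ly) ** 2 + (nz / Lz) ** 2)
        modes.append((f, (nx, ny, nz)))
    return sorted(modes)

modes = room_modes(5.0, 4.0, 3.0)
# Lowest axial mode of the 5 m dimension: 343 / (2 * 5) = 34.3 Hz.
```

This is exactly the "physics first" ordering the post argues for: geometry (Lx, Ly, Lz), material (c), the PDE, and boundary conditions produce the spectrum, after which DSP can build an IR from the modes.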
  5. Hi Chandler, Thanks for this video. The only problem I have with it is the title that includes "physical modeling." This isn't physical modeling as those who actually do such things regard it. Consider this from Stefan Bilbao's book (*):

     "Physical modeling synthesis ... involves a physical description of the musical instrument as the starting point for algorithm design. For most musical instruments, this will be a coupled set of partial differential equations, describing, for example, the displacement of a string, membrane, bar, or plate, or the motion of the air in a tube, etc."

     He goes on to describe the correspondence between physical parts of an instrument and their representations in analogous locations in computer memory. This latter is completely absent in the techniques you are illustrating. The synthesis you are demonstrating is a frequency-domain operation. I think it's great, but it's not physical modeling.

     Regards, Dave Clark

     (*) Stefan Bilbao. Numerical Sound Synthesis: Finite Difference Schemes and Simulation in Musical Acoustics. Wiley, 2009, page 8.
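Bilbao's starting point can be shown in miniature with a finite-difference scheme for the 1-D wave equation u_tt = c²·u_xx: the grid supplies the geometry, c the material parameter, the update rule the physical law, and a plucked shape the initial condition. This is a generic textbook scheme with assumed parameter values, not code from the book:

```python
import numpy as np

# 1-D wave equation u_tt = c^2 * u_xx on a string with fixed ends.
# Grid = geometry, c = material parameter, update rule = physical law,
# triangular "pluck" = initial condition.
c, length = 200.0, 1.0          # wave speed (m/s), string length (m)
SR = 44100                      # time steps per second
dt = 1.0 / SR
N = int(length / (c * dt))      # grid chosen so the Courant number <= 1 (CFL)
dx = length / N
lam2 = (c * dt / dx) ** 2       # squared Courant number, just under 1 here

x = np.linspace(0.0, length, N + 1)
u_prev = np.where(x < 0.3, x / 0.3, (1.0 - x) / 0.7)  # pluck peaking at x = 0.3
u = u_prev.copy()                                     # zero initial velocity

out = []
for _ in range(441):  # 10 ms of simulated output
    u_next = np.zeros_like(u)
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + lam2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next       # endpoints stay 0: fixed boundary condition
    out.append(float(u[N // 2]))  # "listen" at the midpoint
```

The point of the post is visible in the code: changing the object means changing length, c, or the initial shape, and the new sound follows from the same physical update rule.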
  6. Hi RexReed, If you must think of transformers or transformer simulators as "instruments," and I'm not really objecting to that, I would encourage you to think of them not as instruments such as an FM synthesizer, but rather as instruments like a trumpet or clarinet, or even a microphone as used by a vocalist. These latter instruments require a source of sound; FM synthesizers contain their own sources. Because there is no source, transformers don't have the ability to self-modulate as can be done with FM synthesizers.

     In addition, from your comments it appears that you may be under the impression that the transformer simulator analyzes the incoming signal, then directly creates and emits corresponding harmonics. This is almost certainly not true. There are equations which govern how real transformers work. A model of this can be created in software which indirectly or passively creates appropriate harmonics. The harmonics (and what you have described as their modulation) are a result of properly modelling the physics, not a result of running more oscillators, LFOs, or anything like that.

     Regarding the "warm" sound: As the article I referred you to describes, the physics of transformers, and therefore the models used in the software, create less intermodulation distortion compared to other devices, resulting in a "warmer" sound. From the article (Whitlock, page 10):

     "Distortion in audio transformers is different in a way which makes it unusually benign. It is caused by the smooth symmetrical curvature of the magnetic transfer characteristic or B-H loop of the core material shown in Figure 9. The non-linearity is related to flux density which, for a constant voltage input, is inversely proportional to frequency. The resulting harmonic distortion products are nearly pure third harmonic. In Figure 18, note that distortion for 84% nickel cores roughly quarters for every doubling of frequency, dropping to less than 0.001% above about 50 Hz. Unlike that in amplifiers, the distortion mechanism in a transformer is frequency selective. This makes its IM distortion much less than might be expected. For example, the Jensen JT-10KB-D line input transformer has a THD of about 0.03% for a +26 dBu input at 60 Hz. But, at an equivalent level, its SMPTE IM distortion is only about 0.01% — about a tenth of what it would be for an amplifier having the same THD."

     As the article also describes, there is less of a rolloff at the critical frequency than exists for a simple RC filter, so there is also less phase distortion at rolloff. From the article (Whitlock, page 11):

     "This results in an actual roll-off rate less than 6 dB per octave and a corresponding improvement in phase distortion (deviation from linear phase). Although a transformer cannot have response to 0 Hz or dc, it can have much less phase distortion than a coupling capacitor chosen for the same cutoff frequency. Or, as a salesperson might say 'it's not a defect, it's a feature.'"

     Taken together these result in a more pleasant type of distortion than that of the kind of devices we often utilize for creating distortion. A lot of listeners may even respond, "What distortion?" because it may not even sound like "distortion" to them.

     Regards, Dave Clark

     Reference: Bill Whitlock. Audio Transformers. Focal Press, 2006. Formerly published as Chapter 11 in Glen Ballou, editor, Handbook for Sound Engineers, Third Edition, 2001. On the web: https://jensen-transformers.com/wp-content/uploads/2014/09/Audio-Transformers-Chapter.pdf
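The Whitlock claim that third-harmonic distortion roughly quarters per doubling of frequency follows from flux density scaling as 1/f at constant drive voltage. A toy numeric check, assuming only that relation plus a small symmetric cubic term standing in for B-H curvature (the drive constant and eps are arbitrary, not Jensen data):

```python
import numpy as np

def third_harmonic_ratio(freq, sr=192000, eps=0.01):
    """Toy core model: at constant drive voltage, flux density B scales
    as 1/f; a small symmetric cubic term eps*B^3 stands in for the B-H
    curvature. Returns third-harmonic level relative to the fundamental."""
    t = np.arange(sr) / sr                       # one second of signal
    B = (100.0 / freq) * np.sin(2 * np.pi * freq * t)
    out = B + eps * B ** 3                       # odd nonlinearity -> 3rd harmonic
    spec = np.abs(np.fft.rfft(out))              # 1 Hz bins over 1 s
    return spec[3 * freq] / spec[freq]

r60, r120 = third_harmonic_ratio(60), third_harmonic_ratio(120)
# B halves when f doubles, so the relative 3rd harmonic (~ eps * B^2) quarters.
```

Because sin³θ = (3·sinθ − sin3θ)/4, the cubic term produces only a third harmonic whose relative level goes as B², hence as 1/f²: quartering per octave, exactly the Figure 18 behavior.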
  7. Hi RexReed, A good article about audio transformers, but technical: https://jensen-transformers.com/wp-content/uploads/2014/09/Audio-Transformers-Chapter.pdf In order to fully appreciate answers to some of your questions (and to ask more informed ones), you'll have to understand at least some of this material. Suffice it to say for now that the saturation of an audio transformer is fundamentally quite different from the saturation of other nonlinear devices such as transistors, diodes, and tubes because it arises from completely different physical mechanisms. Regards, Dave Clark