
Glenn Stanton

Everything posted by Glenn Stanton

  1. yeah, or you could have a couple of separate drum busses for percussive elements and cymbals, plus parallel compression etc., then combined on a final drums buss feeding the master. either way, if you were creating a stem for someone to use, you'd likely want all the bits included in it.
  2. well, sounds like the real shortcut is using the "save as" feature ?
  3. wouldn't busses be stems? thinking out loud: export entire mix = HW output (or equivalent), assuming all unmuted tracks (or only the soloed ones); export tracks = the selected tracks -- kick, snare, bass, guitar 1, guitar 2, vocal 1, back vox 1, etc.; and export "stems" would be exporting busses -- drums, guitars, keyboards, vocals, etc. at least this is how it seems to work for me.
  4. i export the tracks on project A (using the export dialog and whole tracks) and import into project B (either the import dialog or drag and drop). for MIDI, save project A as a MIDI file and import the MIDI file into project B. then re-arrange the tracks as desired ? it's also helpful to make presets for effects and instruments if you need to re-use those, so you can re-apply them in project B.
  5. as a quick note -- reading the manual: https://eadn-wc05-7545739.nxedge.io/wp-content/uploads/2022/09/RP500_OM_EN.pdf "8. USB Jack The USB jack connects the RP500 to a computer and provides two purposes. First it is used to provide communication between the RP500 and the X-Edit editor librarian software. Second, it is used to stream four channels of audio (2 up / 2 back) to and from the computer when using the RP500 to record with the included Cubase LE4 recording software. Refer to the RP500 Software Installation Guide and Cubase LE4 online documentation on proper setup for this use." so it says Cubase, but it might mean that ASIO4ALL could be the "driver" they recommend (which, if this is the only IO unit, isn't the worst (RealTek is...)), with the understanding that the latency will not be as good as a quality IO unit would give you.
  6. yes, the side wall could effectively be a "gobo" that has absorption on one side and is hard on the other. i was thinking in terms of the idea that if you threw up a simple partition wall (2x3 frame, 1/2" gwb, packed with soft insulation) it would be effective, as it's dense enough to shape the sound pressures yet still absorb (plus the opening "vent" to further let LF "leave"), and useful later as a "room". for the wall -- probably $300 max + 2 days elapsed between fabrication (4 hours?) and waiting for the drywall joint material to dry, then paint. (paint is likely the single most expensive individual component ? ) the actual dimensions will depend on the ceiling height - here i've assumed 8ft.
  7. one interesting thing - if you put in the partition so the space is 11' wide x 13' deep, it's a decent working ratio... add some absorption and ceiling too. maybe a drape on the opening to reduce reverberant behaviour.
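as a rough sanity check on that 11' wide x 13' deep space, the first few axial mode frequencies can be sketched in a few lines of python. the ~1130 ft/s speed of sound and the 8 ft ceiling (assumed earlier in the thread) are my assumptions, and real rooms add tangential/oblique modes on top of these:

```python
# rough axial room-mode calculator for an 11' x 13' x 8' space
# (11 x 13 from the post; 8 ft ceiling and ~1130 ft/s speed of sound assumed)
C = 1130.0  # speed of sound in air, ft/s

def axial_modes(length_ft, count=4):
    """first few axial mode frequencies (Hz) for one room dimension: n*c/(2L)"""
    return [round(n * C / (2 * length_ft), 1) for n in range(1, count + 1)]

for name, dim in [("width 11'", 11.0), ("depth 13'", 13.0), ("height 8'", 8.0)]:
    print(name, axial_modes(dim))
```

the idea is that a decent ratio spreads these frequencies out instead of stacking modes from different dimensions on top of each other.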
  8. you could still fit into your space - i'd still recommend the side partition (think cubicle wall w/ plywood ? ) SAF could even be higher since you'd now be hiding the clutter from the family's gaze... lol. on the "back wall" - a set of absorbers, and a small one on the door, and one on either side of the desk.
  9. one option is to move to the wall adjacent to your TV area and put up a baffle (wall? partition? etc) to create a symmetrical space. then treat.
  10. it would be helpful to know where there are doors and windows, things like fireplaces, utilities, SAF things which cannot be moved, etc. that helps to avoid a long thread of "oh, i forgot to mention the ... cannot be moved" or "it's a door for the neighbor", etc ? that said - there are a few simple things to get started:
      1) position for symmetry -- speakers -- start as a 60° triangle (meaning from your seat they are each at 30°), so the speakers and seat form an equilateral triangle. aim the speakers to be focused about 18" behind your head and about 4-6" above your ears.
      1a) seat yourself at about 37.5% - 39.7% of the room length. so if you have a 20ft room, you will START at about 7' 8", and depending on speakers and so on, you'll adjust positions - seating, speakers, etc. you are looking for: the least disruptive LF response - generally as smooth as possible - in as large a space as possible (the "sweet spot"). this means moving the speakers closer to the wall, farther from the walls, your seat closer and further, etc. also a solid phantom center (assuming you're doing stereo, surround is a different thing) with symmetry so you can hear panning etc.
      2) position so the back-wall impulses (reflected sound) are lower than your direct speaker sound - this is the reason people tell you to shoot down the long path, but it's not always an option - so add more absorption behind you to attenuate the mids and highs off that back surface.
      3) add some treatment on walls and ceiling at the first reflection points. with a simple hand mirror and laser pointer (assuming no cats around) you can quickly spot the points to hang some absorbers.
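the starting-point arithmetic in step 1a can be sketched like this - the 37.5% - 39.7% range is from the post, the 20 ft room is just the worked example, and your own room will obviously differ:

```python
# starting seat position as a fraction of room length (37.5% - 39.7%,
# per the positioning advice above); 20 ft room is a hypothetical example

def seat_range_ft(room_length_ft):
    """(min, max) starting seat distance from the front wall, in feet"""
    return (0.375 * room_length_ft, 0.397 * room_length_ft)

def ft_in(feet):
    """format decimal feet as feet-and-inches, e.g. 7.5 -> 7' 6" """
    whole = int(feet)
    return f"{whole}' {round((feet - whole) * 12)}\""

lo, hi = seat_range_ft(20.0)
print("start your seat between", ft_in(lo), "and", ft_in(hi))
# remember: in an equilateral triangle the speaker spacing equals the
# listener-to-speaker distance, so start with both the same
```

this only gives the starting point - as the post says, you then move things around and listen (or measure) for the smoothest LF response.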
  11. if you have created the region fx but not rendered them, can't you just select the clips and use "remove region fx"? just a quick test with a bunch of different clips across different tracks i could create the region fx and also remove them. if you rendered them, then the clips will have the region fx (in your case, declip) applied.
  12. i use the task queue. by default, my templates have two "default export" tasks, each of which is simply a 2-track WAV or MP3 from the master buss with all tracks (except the noise track) selected. in my record template, there's a task to export each track into the /export audio/tracks folder. and in my master template, i have 10 generic tasks, one per track, and i tailor each one - duration, mp3 metadata, etc. - which exports the track as mp3 into the mastering project's /export audio folder. all this does a few things: i just check the box for which tasks i want to run, and done. the caveat is that if i add tracks, i have to recall the task settings, do the updates, and save. but reducing tracks is no problem. secondly, i can export all tracks, a 2-track WAV, or a 2-track mp3 - each or all of the above.
  13. actually since 1990 or so - 30+ years ? i can fondly remember my days consulting on Wall St and watching the Apple-Windows battles and the financial companies screaming at Intel, Apple, and Microsoft to stop being f**k-ups and make their products work. good to see they've kept up the traditions of bugs and hamstringing...
  14. weird given they tend to work closely with the CPU folks... Compare Windows 11 Business Editions | Microsoft https://www.microsoft.com/en-us/windows/business/compare-windows-11 i think the difference is multiple physical CPUs vs a single CPU - multi-CPU systems are complex multi-buss beasts needing to share IO, memory, scheduling etc. across physically separated sub-components. whereas a single CPU with (say) 64 cores is still everything under one roof. so it's likely the consumer OS (which i believe is deliberately restricted by the software...) should be able to use it. the next choke point is the RAM allocation per core. i think most times, 1GB / core is a bare minimum nowadays. probably best to have 4GB / core if you can afford it.
  15. i think once you get beyond a certain number of cores and/or physical CPUs, Windows 10 etc. starts to get a bit weird. and if you're running a server-type machine with multiple Xeon CPUs + NUMA channels and beyond, you might need to look at the Windows Server OS. i seem to recall someone a while ago had posted on a site about 6 18-core Xeons with 512GB RAM on an older server with Windows Server 2012 R2 and was running SONAR ok.
  16. if you want a more flexible "listening" app for your 2-track (or multi-track if you're doing that), then an audio editor like SoundForge, Acoustica, etc. can be used in lieu of a media player, which may (or may not) have "stuff" enabled to make things "sound better" (even if you think you disabled it). the advantage is, if you find something which is nagging at you, those apps let you do some surgical tweaks (which you can go back and adjust in your mix later) to see if there are things you might want to change before the mastering step (or even after, if you dare ? )
  17. as a note: it depends on where you're looking at the square waves - in the digital (and most times electrical) domain they're nearly instant vertical up and down voltage changes (depending on slew rate = slope), although path capacitance and other factors can distort them. but correct - a square wave produced audibly is a composition of sines (as all audio waves are).
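that "composition of sines" is easy to demonstrate: a square wave is the sum of odd sine harmonics at 1/n amplitude, and the more harmonics you keep, the steeper the edges get. a minimal python sketch:

```python
import math

# partial Fourier sum of a square wave:
#   sq(t) ~ (4/pi) * sum over odd n of sin(2*pi*f*n*t) / n
def square_partial(t, f=1.0, n_harmonics=50):
    """evaluate the first n_harmonics odd-harmonic terms at time t (seconds)"""
    return (4 / math.pi) * sum(
        math.sin(2 * math.pi * f * (2 * k - 1) * t) / (2 * k - 1)
        for k in range(1, n_harmonics + 1)
    )

# at a quarter period the sum converges toward +1, at three quarters toward -1
print(square_partial(0.25, n_harmonics=500))
print(square_partial(0.75, n_harmonics=500))
```

this is also why band-limiting (an anti-alias filter, a speaker, air) rounds the corners: it simply drops the upper harmonics from the sum.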
  18. as a note: i've been using the Waves API 500 series EQ plugins in lieu of the timeworks EQ that i used on older projects, as i found they seem to have a similar effect to what i was looking for (my normal go-tos like the PC eq, fabfilter, etc. just didn't quite get there - that bell-like resonance). possibly that was just bad coding, or original defects in the HW that the Waves folks copied.... lol ?
  19. coolio. one thing i periodically do is run a 1kHz signal through an effect to see (via an oscilloscope) what harmonics are added (or not) versus published hardware data on the same. some high-end effects claiming to match hw somehow miss the boat completely, and some low-end effects nail it. it will be interesting to see your results.
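a software version of that oscilloscope test can be sketched in python: generate a 1 kHz tone, run it through a stand-in nonlinearity (a tanh soft clipper here - purely hypothetical, not any particular plugin), and measure each harmonic's level by correlating against it. the 48 kHz sample rate is also an assumption:

```python
import math

SR = 48000   # sample rate - an assumption, any rate above ~10x F0 works
F0 = 1000.0  # the 1 kHz test tone from the post
N = 48000    # one second = an integer number of cycles of every harmonic

# stand-in "effect": tanh soft clipping - a hypothetical nonlinearity
def effect(x):
    return math.tanh(2.0 * x)

samples = [effect(math.sin(2 * math.pi * F0 * n / SR)) for n in range(N)]

def harmonic_level_db(sig, harmonic):
    """level of one harmonic relative to full scale, via sine/cosine correlation"""
    f = F0 * harmonic
    re = sum(s * math.cos(2 * math.pi * f * n / SR) for n, s in enumerate(sig))
    im = sum(s * math.sin(2 * math.pi * f * n / SR) for n, s in enumerate(sig))
    amp = 2 * math.hypot(re, im) / len(sig)
    return 20 * math.log10(max(amp, 1e-12))

for h in (1, 2, 3, 5):
    print(f"harmonic {h}: {harmonic_level_db(samples, h):6.1f} dB")
```

since tanh is a symmetrical (odd) nonlinearity, you should see only odd harmonics - a tube-style asymmetric stage would show even harmonics too, which is exactly the kind of signature you'd compare against the published hardware data.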
  20. it may be wise to uninstall the plugin, backup the presets folder if it has one, then check if any left over registry entries and remove those. then install it fresh. if it still crashes, you should also follow the steps to get a crash dump and provide to the support team so they can look into it as well.
  21. so - you are using the timeworks plugins on other projects ok? and only two had the issue. if you insert a new track in a new project and add the mono version, it crashes - is the track a stereo track? sometimes older (and unsupported) plugins don't like stereo when it's expecting mono. just noticed you're using jbridge - on a dx effect? one thing i've been doing on some older projects (like 8.5) is document all the effects, export all the tracks as WAV, and import the WAV into a wholly new CbB project. then if the tracks are using a plugin which i cannot install or otherwise use, i'll recreate the effects in newer effects as much as possible. or listen to the older tracks to see if i can recreate the sound (mostly this happens with the revalver amp that i cannot open anymore). delays, eq, reverbs, modulations, etc. are all pretty straightforward to recreate - like the lexicon reverb i no longer have installed, so i use my slate reverbs to re-do it.
  22. one way to cheat - i save my (mostly) completed mix project as a template, a normal .cwt file. copy it over to the core/track templates folder and rename it to .cwx (a track template). then insert it into a blank project. everything is there except the audio bits - all the tracks, busses, etc. the downside: all your buss colors are missing (in my case they take a minute or two to reset) and you still need to get audio into it. my main method nowadays - use a recording template to record or load client files for cleanup etc., rename the tracks to match my templates, then export all tracks and import them into my mix template, which already has most of the routing i do routinely prepped. no dragging and dropping across projects, etc. then just remove stuff i'm not using, or add a few things i think i need, etc. so using that methodology - mix project -> template -> rename as .cwx, insert into a blank project, and feed it the recording exports - voila, you end up in the same place. it's really just making sure that each project has (mostly) all the same settings after getting a mix completed that will form the basis for the rest. works well when recording live instruments where the # of instruments is essentially the same and you're looking for a consistent sound across the tracks. i don't mind having multiple projects for each step, as projects are light, and the number of audio files (with good naming) isn't necessarily exploding in #'s, so overall space is not a problem.
  23. exactly - if you are using direct monitoring on the IO, so there's no latency for the vocalist (e.g.), the latency on the round trip to the reverb would likely not be impactful, since the full effect of it is only really heard after the voice signal stops. but on effects being played directly (e.g. a guitar solo) which need to be heard, you're almost always going to hear the latency. one option is to use hardware effects for the player, record both wet and dry versions, and re-amp if needed. i also tend to mix down a 2-track and create a separate project for vox and solos anyway - then there is very little overhead, so latency (on my current setup) can be < 5ms round trip with many effects.
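that round-trip figure can be ballparked from buffer size and sample rate - roughly one input buffer plus one output buffer. real interfaces add driver and converter overhead on top, so treat this as a lower bound:

```python
# back-of-envelope round-trip latency: one input buffer + one output buffer
# (driver/converter overhead is extra, so real numbers will be higher)

def round_trip_ms(buffer_samples, sample_rate):
    """approximate round-trip monitoring latency in milliseconds"""
    return 2 * buffer_samples / sample_rate * 1000

for buf in (64, 128, 256):
    print(f"{buf} samples @ 48 kHz: {round_trip_ms(buf, 48000):.2f} ms round trip")
```

this shows why a light project helps: the fewer plugins in the path, the smaller the buffer you can run without dropouts, and a 64-128 sample buffer at 48 kHz is what gets you into that < 5ms neighborhood.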