Everything posted by Amberwolf
-
I don't suppose anybody knows if this is one of the few libraries built with a sample folder full of sound files, or whether it's a monolithic Kontakt-format soundfile? (Asking first before I download over 3 GB just to find out.) I'll never have the full Kontakt version to use it with, but if it has the individual sounds available I can still use those to create music (or whatever) with, as I do with other sound libraries and sample sets.
-
FWIW, I wouldn't be able to use any of the output of these tools at all, except that I'm pretty good at "macgyvering" stuff (of any kind) together out of whatever I have available (a skill learned out of necessity from a young age and honed over the decades to a finely invisible edge). So like everything else I do, I take the bits it spits out that I see potential in, then chop them up and mix them with each other, other things, and my own direct input, and get something resembling what I would create if I actually had everything I *really* needed to make it.
-
Forgot this part: I have also tried out a couple of sound-generation tools that work by inputting a text description, but neither one understands any type or form of the words that actually describe sounds, and cannot generate what is asked for; and again, it can't edit the results, just start over with a result that is also wrong in a different or similar way. I also tried one of the from-scratch music-generation tools, just to see how bad it was, and it is just as bad as all the others, for exactly the same reasons: it cannot accept and use any terminology related to music creation or description, and it cannot edit the results, only start over with yet another totally wrong result that can't be edited.... So they are all exercises in frustration if you are an actual artist trying to create a specific result.

I wouldn't let any of these tools anywhere near my actual music; there is zero chance they'd do something useful to it. I suppose if a user doesn't have a specific result in mind, and just wants something they don't have to put significant work into to fill a hole in something, they can be used, but otherwise... "not ready for prime time" is a huge understatement.

In conclusion: I don't think that any of these tools will really get better until their makers listen to the experienced people using them who actually *are* artists of that type, and remake the tools so they can be used correctly, *and* make them able to edit results iteratively rather than starting over on every try. It would help if the people creating the tools had any idea how to use such tools to create things, since then they would be able to build them to create specific results, instead of stuff that randomly happens to resemble the input description... sometimes.
-
Unrelated to audio, just graphics, but I have previously posted some thoughts (and complaints) about the way the tools I've tried out for my album cover "art" don't "listen" to the user very well, or at all. A general summary below:

I think the main problem with all of these tools is that none of the people creating them is familiar with the art process, terminology, etc., for each of the different types of tools, and so the tools don't "know" any of the terminology or processes required to describe a specific result. Thus, the user can't specify things in a well-known way and get a predictable result. Additionally, none of the tools are editably iterative; meaning, you can't take the result you got, tell the tool what is wrong with it, and have it just go back and correct those things, leaving everything else the same. Instead, all of the tools start over, and the user has to provide updated descriptive guesswork that hopefully gets the tool to spit out a more useful result.... But it will never be the exact result desired, because it can't be edited, and it can't be specified correctly in the first place.

For instance, in an image tool, you cannot even specify point of view, lighting, camera focus, angles, relationships between objects, etc. Well, you can state them, but none of the tools obey any of them. You might get lucky and ask for something in sunset lighting and get yellowish lighting, but also get the sunset in the image, even though you asked for it to be *lit by the sunset behind you*. Etc.

Most of the "cover art" on my Bandcamp and SoundClick stuff has been updated with "AI assisted" images... some of them are completely AI generated, but I've had to edit some of them significantly, or even composite various layers of separately generated pieces to get a useful result (like the Gareki covers, the banner image, etc.).
Still, none of them is the result I really wanted, but I don't have time to do the artwork *and* the music; each takes dozens to hundreds of hours, so I have to pick one, and that's the music. If I had money I'd pay a human to do the art for me, but I don't... the music will never earn me anything, so it certainly won't pay for the art, and besides myself I have a shaggy four-legged people-sized monster to feed....
-
Slip Edit, Audio Transients or Slip Stretch , Time Stretch?
Amberwolf replied to sadicus's topic in Cakewalk by BandLab
As a few examples, I used all of the techniques to line up, stretch, and fit the hundreds of wolf and environmental sounds in Ookami no Kari no Yume (Wolf's Dream of the Hunt) https://amberwolf.bandcamp.com/track/ookami-no-kari-no-yume-wolfs-dream-of-the-hunt starting mostly with longer clips of group wolf howls and such, panting and other sounds from my own St Bernard, birds, forest sounds, etc., using split and slip edits to break many of these up into other clips. Some of the individual wolf howls were long series of howls with time between them; those were split and broken up into clips of single howls, and then those were moved to line up their start or end with some event in another track, and then slip stretch was used to lengthen or shorten the clip so the other end of it would line up with a different event. Some of those were split into multiple clips themselves, so that the split ends stayed where they were, but I could now stretch or squish *both* ends without moving that split spot, so I could line it up with three events, or preserve some other characteristic. Some clips were retimed using AudioSnap by adding markers where desired and dragging those around, not always to put them on a beat but sometimes to move them off on purpose, to draw attention to them and refocus the audience. Many things were pitch corrected in certain spots to either match or contrast with other events. The flute-like main high synth was rendered to audio, and many of the clips were chopped up to pitch shift pieces of them, or to slip stretch or squish them in time to fit on or between other events. Most of the taiko percussion was clips slip edited down to just enough to accent something else, clip-faded where necessary. Some were slip stretched for time fit instead of slip edited if they were more "in the open," so they were more natural sounding.
Intron 159 https://amberwolf.bandcamp.com/track/intron-159 used split and slip editing to break up a pair of single percussive clips I had started the project with as overlapping loops for the whole project length. That broke them up into bits for just the areas they were needed in, and then some of the specific hits were clipped out and placed elsewhere, often time stretched or squished, some pitch shifted, to accent or interrupt. Similarly, many of the main synth lines were done the same way, once the Z3TA2 synth line was rendered out as audio. The backing ambience sounds were also split, stretched, etc., to fit as needed.

I'm Sure It's Nothing, But... https://amberwolf.bandcamp.com/track/im-sure-its-nothing-but virtually every vocalization was time stretched or squished to fit or line up with something. Some of the percussion was as well.

Gareki and Gareki II https://amberwolf.bandcamp.com/track/gareki https://amberwolf.bandcamp.com/track/gareki-ii-alternate-version-cinematic-feel Vocalizations, percussion, synth clips, ambience, etc., all had some pieces manipulated with one or another of these tools. So did Less Like A Whisper https://amberwolf.bandcamp.com/track/less-like-a-whisper but to a much greater extent. The percussion tracks were the most heavily modified, but the many vocal pieces were quite a lot of work. It's still not really done; I just ran out of ability to figure out how to fix some of the issues without causing others, but I have learned a lot in the several pieces created since then that will help me when I go back to LLAW.
A Peek Over The Wall https://amberwolf.bandcamp.com/track/a-peek-over-the-wall was a second experiment in putting together things that were not made to go together (LLAW was more or less the first), but were instead pieces of "song kits" (which I don't really have a use for, except for the pieces I can build something totally different with, like this), to make something with a feel of its own; all of the pieces were extensively altered with all the tools discussed above.

If I Should Wake https://amberwolf.bandcamp.com/track/if-i-should-wake was a third, even deeper experiment in literally throwing something together from assorted whatever, then forcing things to fit with all of the tools discussed here.

Just Give Me A Voice https://amberwolf.bandcamp.com/track/just-give-me-a-voice The entire lead guitar part was chopped up and rebuilt completely from what I'd played in a scratch garbage session, using all the tools to completely redo and modify the recorded clean guitar (a 6-string bass, used as a guitar instead, because the spacing lets me finger the strings with it flat on my lap without totally mangling everything, since I can't actually play). Some parts of the synth/perc/etc. tracks were also rebuilt with the tools after I'd rendered them out as audio, some for effect and some because it was easier than redoing them as MIDI and rerendering.
-
I've only used Calibre for reading books locally on my PC, but supposedly it has a converter that can do this.
-
This might not help, but it is apparently possible to convert a PDF to a CHM (never tried it); would that give you a useful local help function? (I expect not, but...) https://www.pdffiller.com/en/functionality/convert-pdf-to-chm.htm There are others out there too, and I'm sure some do a better job than others.
-
Parenthetical post from the **** point in the previous one: The reason I was using the Acoustic Research Powered Partner speakers was because I finally got the music workstation re-set-up, after a long, long time (couple years? more?) of doing all my music stuff from the bed.

The ASR88 (my main keyboard) is ancient, boots from a floppy disk (unless I power on the equally ancient SCSI harddisk), and like most of its line is a bit picky about which disks work in it... so now I have a Gotek USB-drive floppy-replacement unit in place of the FDD, with a normal boot image on the USB stick in the drive. (You can't do anything at all with the ASR unless you first boot it from an OS disk....) It's not really faster than the FDD, but it is not going to mechanically wear out, so there's much less risk of it becoming unusable due to drive failure. And more likelihood of me using it, because I don't have to deal with making a new disk image from the computer when one of the floppies bites the dust again (and hoping that the few disks I have left, and the USB FDD on the computer, and the drive in the ASR, still work at that point). Also, leaving the disk in the drive is bad for the disk and the heads, as humidity changes can cause damage to both, and so can vibration (such as from the ******* in the area with their earthquake-level sound systems that shake everything frequently, for hours at a time). So I had to remember to push the disk into the drive each time I booted it up... now I don't.

Anyway, it's set up now, and one switch will just turn it on and boot it up. At present there are still a few hitches:
--I still have to use the (unmovable, for reasons) laptop that's over by the bed, so that also means I have to run a monitor cable over to it from the one that's on the workstation. I can leave that connected and enabled in the computer, even when not using the station, and the monitor is just turned off with the station power.
--I have to run audio cables from the external USB audio interface to the PP speakers.
--I have to run MIDI from the ASR out to the in of the USB MIDI interface on the laptop.
--I have to run a USB cable from the computer keyboard on the station to the laptop. (The trackball is wireless, so I just carry it over there to use it, but there are no dependable wireless keyboards with replaceable keyswitches and keycaps (I wear them out in a year or two or so), programmable backlighting, etc.)

All of those cables have to be 12-15 feet long or so, because they have to run under a carpet to keep JellyBeanThePerfectlyNormalSchmoo from tangling her giant self up in them and dragging everything with her out of the room in a panic. 😆

I don't have the budget for really good cables new, so I carefully choose what I can get from thrift stores, etc., and have everything except a good HDMI cable that is longer than a few feet... they never get those. So I'm using a DVI-to-VGA adapter, one good VGA cable that's thick and well shielded for half the length I need, and a crappy VGA cable that's thin and unshielded for the other half... the two good extensions that could be used both have something wrong with one of the color wires, so I am missing a color on each one. But all these VGA cables are up to 30+ years old, saved from olden days when I ran the noisy computer in the bathroom adjacent to the bedroom, with both doors closed, and long cables for everything to get user input and feedback to/from the computer. So it's understandable that they are a bit on the worn side and don't all work as they should anymore. Unfortunately, thrift stores pretty much never have useful VGA cables anymore, other than occasional 3-foot ones with unshielded wiring.
-
Had to do some searching for that one; I can see where it would. In the version you heard (if it was since late last night) I did some editing using the Powered Partners instead of the soundbar; I'd forgotten how much more clarity there is in the PPs' low end! ****

One thing the PPs showed me that can't be heard in the soundbar is the distinct gap between the bass note and the delay-FX-modified repeat of that note in the beginning of the piece. They blur together on the SB, and that sounds like I wanted it to. But on the PPs it's quite clear there's a gap between them, which is fine in the majority of the song, where there's a beat and other things that take advantage of that gap, or fill it, but in the beginning it's just weird. So I've automated the sub-bass's delay feedback / mix so it's much less at the beginning, then increases as it heads into the more populated parts of the piece, because if I cut it back everywhere, some of the "beat" is missing. I also automated some of the EQ on that track to leave in the lows at the beginning and reduce them as the delay increases, so the overall lows stay around the same. Then I lengthened a few bass notes in the beginning to slur over the delay gap for the ones that weren't fixed by the above, so now it sounds more like I intended. Oddly, it still sounds "the same" on the soundbar.... :?

Fancied up the snare in some parts to match and/or complement the bass and other bits. I was also going to try playing in some more piano bits, but the LP64 EQ I had to replace the Sonitus EQ with (to take out the low end of things) appears to be causing several to many times the latency I had without it, making it impossible to play live to it. Didn't want to burn the FX into the tracks yet, so didn't get the added bits.
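For anyone following along with the delay feedback / mix automation above, here's a minimal sketch of how those two parameters shape the repeats. This is the generic textbook feedback delay line, not Cakewalk's or any particular plugin's actual DSP; the function name and parameters are just for illustration:

```python
def feedback_delay(dry, delay_samples, feedback, wet_mix):
    """Minimal feedback delay line.

    Each repeat is the previous echo scaled by `feedback` (0..1);
    `wet_mix` (0..1) blends the dry signal with the delayed signal.
    Automating feedback and wet_mix toward 0 at the start of a piece
    is what makes the audible repeat (the gap filler) go away there.
    """
    buf = [0.0] * delay_samples  # circular delay buffer
    out = []
    for i, x in enumerate(dry):
        delayed = buf[i % delay_samples]                  # echo from delay_samples ago
        buf[i % delay_samples] = x + delayed * feedback   # feed input + scaled echo back in
        out.append(x * (1.0 - wet_mix) + delayed * wet_mix)
    return out

# A single impulse through a 3-sample delay, 50% feedback, fully wet:
# echoes appear at samples 3, 6, ... with halving amplitude.
echoes = feedback_delay([1.0] + [0.0] * 8, 3, 0.5, 1.0)
```

With the feedback automated down, each successive repeat dies out faster, which is exactly why reducing it in a sparse intro hides the gap between a note and its first repeat.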
-
Slip Edit, Audio Transients or Slip Stretch , Time Stretch?
Amberwolf replied to sadicus's topic in Cakewalk by BandLab
As I understand the term, slip edit refers to moving either the start or end edge of a clip to expose more or less of the clip contents; it does not move the contents or change them. By itself, this isn't directly useful for lining up multiple transients in a clip with other things in the project. But you can use it along with Split Clip (or the scissors tool) by cutting a clip up into the sections you want, then slip editing them down to just the bits you want to hear, then dragging them along the timeline until they line up where you want them to. I use this to line up percussive or vocalization/etc. sounds that are in clips when I don't want to change the length of any parts of the sound itself, just line up specific parts of it with other things. What you refer to as slip edit in the example would just be a drag move of the clip (or you could use nudge); you wouldn't be editing the clip contents or length, just moving it to line it up.

Slip stretch isn't a term I recall, but it sounds like it refers to the function that in my ancient version is done by holding the CTRL key and then click-dragging either the start or end edge of a clip, which stretches or contracts the clip contents timewise without changing the pitch of the sound. This can be used to directly line up multiple (but not all) transients in a clip with other things in the project. So if you have something with, say, two large transients that need to line up with two beats, and you don't care about the rest of the transients lining up, then you can just stretch it until they're the right distance apart, then drag the clip until they both line up where you want them to. Or, for single transients, simply make the sound longer or shorter to fit a specific amount of time.
I use this for some vocalizations or other sounds that may have been recorded at a different tempo, when I don't want to sit there and figure out what that was vs. what I have now and do some ratio math; I just move the clip till the start (or end) edge is where I want it, then ctrl-click-drag the other edge until it fills the space I want, shortening or lengthening it as needed till it sounds the way I want. Then, after saving a version of the project, I'll bounce that to a clip, since the realtime stretch math doesn't sound as good as the offline bounce math, and I need to hear things the way they'll sound in the final result in order to compose. If I change my mind later, I can just go back to the saved version and copy/paste the original clip over to the current one (or reimport the audio, etc.). You can also just go into the clip properties dialog and directly change the clip time, pitch, length, etc., numerically in one or another of the fields. I think that most of them are exclusive; if you're using one method you can't simultaneously use another, though it would be nice if that was possible for experimenting before bouncing. Instead, you have to change one, bounce the clip, then change another, if you need to do two or more of them to alter a clip to your liking.

If there are multiple transients you need to line up, and you want the clip to remain intact in length, etc., then AudioSnap is the "easiest" way to do this. You can use the AS quantize, or you can manually line them up, or first the one then the other; whatever fits your workflow better or makes the sound/effect you are after. I use this for almost anything I have to record in as audio, as I don't have physical control good enough to get the timing I want.
So I slice it up into the clips I'm keeping vs. the ones too crappy to be worth fixing, then enable AS on the clip, turn on the transient markers I want to move around and the ones I want to remain exactly where they are, and disable the others. Then I move the transients I want till it sounds right (which is usually not exactly on any beat, depending on the rhythm I'm after), save the project as a new version, then bounce the clip (for the reasons previously noted). There are also ways to combine each of these things in various orders to achieve other results, including deliberate distortions of sound or timing, so which one you use depends on how you want the final result to sound, and the specific work you're trying to do. Depending on your source material, the result desired, and how far you have to push timing, you may need to use one technique on some parts, and another technique on others.
-
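The tempo-ratio math that the ctrl-drag slip-stretch workflow lets you skip can be written down in a few lines. This is a generic sketch under the usual assumption that stretch ratio = source tempo / target tempo; the function names are just illustrative, not anything in SONAR/Cakewalk:

```python
def stretch_ratio(source_bpm, target_bpm):
    """Time-stretch ratio to make a clip recorded at source_bpm
    sit correctly in a project at target_bpm.
    Ratio > 1 lengthens the clip (slower target tempo); < 1 shortens it."""
    return source_bpm / target_bpm

def stretched_seconds(clip_seconds, source_bpm, target_bpm):
    """New clip length after the stretch; pitch is unchanged."""
    return clip_seconds * stretch_ratio(source_bpm, target_bpm)

# A 4-second phrase recorded at 120 BPM, dropped into a 100 BPM project:
ratio = stretch_ratio(120, 100)             # 1.2 -> clip must get 20% longer
length = stretched_seconds(4.0, 120, 100)   # 4.8 seconds
```

Dragging the clip edge until it "fills the space" accomplishes the same thing by eye and ear, which is usually all that matters.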
The current one should be up on the bandcamp page https://amberwolf.bandcamp.com/track/im-sure-its-nothing-but
-
Can you send each set of tracks to its own bus, then group the busses instead?
-
Long ago, when I used to "play" live at sci-fi conventions, I was in a hotel lobby with my Ensoniq on a standup rack and my heavy Powered Partner speakers on top of it, and some drunk guy from a wedding party at the same hotel knocked into me and one of them hit the floor. From then on it had a buzz with almost anything with low end... I assumed it was the speaker itself, the coil rubbing on the magnet, but one day a couple of decades later I had to open it up for something else (I forget what), and while it was open it did not have the buzz. :? So for a while after that I just left the screws loose and a piece of foam shoved between the case halves in one place. Eventually I found the buzz cause; IIRC it was one of the screw standoffs or something in the plastic front bezel, and I fixed it, and now no buzz.

Two things came from that for me, both of which I already "knew": the first was not to assume what a problem is (wish I'd thought to check for the obvious cracked plastics right after it happened; wouldn't have had to live with that buzz for so many years), and the second was to put velcro all over the bottom of the speakers and on the top of the keyboard so they couldn't slide off. :laugh: (That part I did right away after that convention.) I think once beer got splashed on stuff too, but it didn't damage anything, just left an awful smell, so I don't recall the details of that one.
-
Before you get anything from that company, be aware that Spitfire's garbage is going to autoupdate and break your ability to create things, over and over, and you'll waste hours having to relearn the interface, redownload all your content because it can't find it or thinks it's broken when it's not, etc. Avoid this company for your sanity.
-
In my ancient pre-bandlab SONAR it shows three notes in a single clip in either TV, TV-PRV, or PRV.
-
At the *start* of the tracks, from what the post title says. I've heard this myself occasionally in just live playback, when I stopped playback where there was (presumably) some large difference in where the waveform ended, and (before the FX could decay out) I immediately hit rewind and play to start from the beginning, where it was silent. But I haven't had it in an export. (Keeping in mind that I am not using a recent version of CW or S.)
-
Does it help to toggle the "Play effects tails after stopping" off, then back on again, just before the export?
-
How to remove automatic underscore in saved file name
Amberwolf replied to Rickddd's topic in Cakewalk Sonar
I think that's related to or the same as this -
Thanks! I swapped out the Sonitus EQ for the LP64 EQ to calm down 35 Hz and lower by 24 dB. (It's only 6 dB down at 35 Hz; the rolloff starts at 50 Hz, AFAICT on the scale (hard to tell on its GUI), and presumably it's down by 24 dB by the time it hits 0 Hz.) It definitely makes a huge difference in the VSPAN display down there, and I can tell there is some difference in the feel, even though my system can't reproduce all that stuff down there. I tried a bunch of things with the Sonitus EQ after this to match it up to the LP64, but nothing I can do with it makes it cut out the low end unless I start way up in the 100 Hz+ range, which negatively affects everything. So... how does it sound now? Too much low end cut out? Not enough?

I haven't used CVS... but a quick Google on versions makes it look like it's another of their acquire-and-rename products (like the ex-JASC Paint Shop Pro v5 that I use, which was bought out by Corel and transformed into whatever it might be nowadays). It used to be Ulead VideoStudio, which I did use back in the 90s or maybe early 00s? But I don't recall much about it.

Thanks for the idea. Must be a feature of newer versions than I have on my old Win10 system. I'll have to look at the newer version that's on one of the HP server racks I have awaiting conversion to a new DAW and see if they can do it.
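To put rough numbers on a low cut like the one described above: for an idealized filter with a straight-line rolloff below its corner frequency, attenuation grows with the number of octaves below the corner. A sketch of that arithmetic (generic math, not LP64's actual curve; the 50 Hz corner and 12 dB/octave slope below are assumed values for illustration only):

```python
import math

def highpass_attenuation_db(freq_hz, corner_hz, slope_db_per_octave):
    """Idealized low-cut (high-pass) attenuation:
    0 dB at or above the corner frequency, then slope_db_per_octave
    for each octave below it. Real filter curves round off near the
    corner instead of forming a sharp knee."""
    if freq_hz >= corner_hz:
        return 0.0
    octaves_below = math.log2(corner_hz / freq_hz)
    return slope_db_per_octave * octaves_below

# With an assumed 50 Hz corner and 12 dB/octave slope:
att_35 = highpass_attenuation_db(35, 50, 12)    # ~6.2 dB (about half an octave below)
att_12 = highpass_attenuation_db(12.5, 50, 12)  # 24.0 dB (two octaves below)
```

So a figure of "6 dB down at 35 Hz with the rolloff starting at 50 Hz" is roughly consistent with a 12 dB/octave slope, reaching 24 dB down around 12-13 Hz.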
-
Is it lame music cartoon time again? I think "yes".
Amberwolf replied to Notes_Norton's topic in The Coffee House
Maybe they just want that good ol' wire rattle and transformer hum to give it character..... -
Is there an intelligent eq that can match diff clips on a vocal track?
Amberwolf replied to T Boog's topic in Cakewalk Sonar
Do you mean an expander (or compander)? (sorry I don't know all the terminology)