
Amberwolf

Members · Posts: 712 · Days Won: 1

Everything posted by Amberwolf

  1. I don't think this would make a difference bounced or not, as it doesn't happen to me, but: not sure if it applies to the newer versions, but in my old SONAR, the advanced options in the paste dialog offer these choices for what to do with existing material: blend old and new; replace old with new; or slide over old to make room. I always leave it on the first option, as I almost never need either of the others.
  2. Long ago, when I wanted a theremin sound, I used Mysteron by FXpansion. I had a touch screen monitor back then, which made it pretty fun to mess around with. But you could also use a joystick-to-MIDI adapter and "driver" program like this https://www.fergonez.net/softwares/fjoymidi or others easily found in a websearch. Or a joystick-to-mouse-pointer program like this https://www.imgpresents.com/joy2mse/j2m.htm
  3. Well, it's better than my guitar playing, and the instrument is better than my electric guitar, a modified First Act I found at Goodwill... (I don't use it much since I got a used Ibanez 6-string bass that I can lay on my lap, playing the frets closest to the bridge for guitar parts and the normal frets for bass stuff, but I can only play one or two strings at a time.)
  4. Hmm...I'd never heard of it till now, but: https://forum.modartt.com/viewtopic.php?id=5897
  5. Look up Real Folk Blues by the same group; you'll probably like that too. She also worked with Origa on the Ghost In The Shell: Stand Alone Complex theme (one of my favorite pieces), and at least some of the in-series music.
  6. Which is why I gave up on SINE Player; every update broke all of my sounds, forcing redownload of gigabytes of them even though nothing about them had changed and the content and everything else was still there... the updated version just can't find its *** with both hands.... And if I *don't* update, then they randomly decide to change the structure of their server site, which somehow causes SINE Player to not be able to find my content, even though it's on MY system, not theirs, and should have nothing to do with their website... but it does. Their fix? Have me install the update, which still breaks all my stuff and still forces me to redownload everything. Screw that. So... I'd rather NOT have updates; I'd rather be able to just sit down and make music whenever I feel like it.
  7. There's a bunch of ways to do backups, but a periodic full backup plus regular (frequent) incremental "difference" backups is probably the quickest. The only catch is that if you have a total failure, you have to restore the full image, then each of the difference backups in order, to get back to where you were just before the failure. If the difference backups store the difference from the *full* backup rather than from the last backup, then you only need to restore the latest difference backup after restoring the full one.
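As a toy illustration of the trade-off described above (my own sketch, not any particular backup tool): with incrementals, each backup stores changes since the previous one, so every backup in the chain must be restored; with "difference from the full" (differential) backups, only the full backup plus the newest difference is needed. The names below are made up for the example.

```python
# Toy comparison of restore chains for two backup schemes.
# "incremental":  each backup stores changes since the PREVIOUS backup.
# "differential": each backup stores changes since the last FULL backup.

def restore_chain(scheme, backups):
    """Return the list of backups to restore, in order.

    backups: names ordered oldest-to-newest; backups[0] is the full backup.
    """
    if scheme == "incremental":
        # Full backup plus every increment after it, in order.
        return list(backups)
    elif scheme == "differential":
        # Full backup plus only the newest differential.
        return [backups[0], backups[-1]]
    raise ValueError(f"unknown scheme: {scheme}")

backups = ["full_sun", "diff_mon", "diff_tue", "diff_wed"]
print(restore_chain("incremental", backups))   # all four, in order
print(restore_chain("differential", backups))  # ['full_sun', 'diff_wed']
```

The differential scheme trades larger individual backups (each one re-copies everything changed since the full) for a much shorter restore after a total failure.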
  8. Makes me think of Yoko Kanno and the Seatbelts, here playing Tank! (the theme to Cowboy Bebop); she's just conducting here rather than playing or singing.
  9. If it's a VST instrument installed in your default VST folder, then you can just run a new scan to detect it, and it will then show up in the list of VST (or VST3) instruments when you choose that in the left column. If it's not installed in your default VST folder, you'll need to click the VST Configuration options button, add the folder it's installed in to the list, then run a new scan to detect it.
  10. Drag the clip from the track to your desktop (or another Explorer window), and it should work. (It does in my ancient version; I expect it still does in modern ones.) Note that when you do this it may take the clip out of the project (dunno why), so just use Ctrl-Z / Undo to put it back. When I put clips into a project I don't use Import; I just drag and drop them where I want them.
  11. This thread is confusing... if the licenses will be deactivated so soon, what would be the point of having them at all? (In my previous experience, that phrase usually means the software is designed to phone home and then stop working.)
  12. Is it being bent by CC events or by an automation envelope?
  13. I've found some of the AI imagery outputs I've gotten to be weird and funny (some of them actually disgusting or repellent), and while I found it fun in the beginning, it's grown frustrating: the programs don't accept the common terminology required to tell them what output is actually wanted, and because there's no iterative editing capability, they all just generate a whole new output based on the updated prompt. So it's not possible to just fix the problems with something that is almost perfect; it starts over every time, always with something wrong. It's still fun to first see what weirdness it generates from a prompt, though. Despite having to manually edit the output I get, I still use it for my album "art" and a few other things. Someday I'd love to create a video like you have, for the various stories of my songs, if there's ever time. Which specific tools / sites did you use? I could at least try them out....
  14. What happened just before this started? Was there an install? Update? Etc? Was there an uninstall? Was there a change in environmental conditions? Weather? Power? Hardware? Cabling?
  15. Is that the *only* MIDI device in either category? What are the other devices listed there? Do they have MIDI functions? What do you see if you select "show hidden devices" in the menus up top?
  16. Thanks--that's better info, though I don't want the choirs, just the individual singers like Laurie and at least some of the others in the Seven Solo Voices bundle. (ok, I could *use* the choirs, too, but....)
  17. In Windows Device Manager, what MIDI devices show up, in the Software Devices category, and in the Sound, Video and game controllers category?
  18. I just wish 8Dio had all their libraries (at least the vocal ones) in Soundpaint, but there are several that exist only in the other format (Kontakt? I don't remember) that I can't use, with no plans / timeline on SP versions. Also, the SP versions are not included in the same sales as the other versions; generally the ones I want are on sale only in the version I can't use. They said that even though they're the same company and team, the sites are separate and so have separate promotions. They've had sales where I could've skipped enough groceries / household stuff for a few months to get the SP versions, if SP versions existed and were part of the sale, but... they can't have my money, because they don't and they won't be.
  19. If it sounds normal after you stop recording and are just playing back, then you probably have your audio interface's direct monitoring turned on. On some interfaces this is a button on the front panel, like on my Avid USB Pro Duo. If it's turned on, you will hear the input *twice*: once thru the DM function in the interface, and again thru input monitoring in the recording program's track controls. If you're not monitoring with effects in the track, you can turn off IM in the track and use the interface's DM as a "zero latency" monitor of your recording. If you are monitoring with effects in the track, leave IM on and turn DM off.
  20. Do you think people would look for and read a sticky better than a dozen threads with it in the title (some of which also say trouble, problem, etc)? (or the many threads / posts mentioning and then discussing ASIO4All...) (sorry; just have seen this issue with pretty much any commonly-posted topic on any forum I've ever been to).
  21. Ditto. Could be the greatest sounds on earth, but that company...nevermind.
  22. Thanks! That means I can probably use CONNCW.exe https://www.kvraudio.com/forum/viewtopic.php?p=8994597#p8994597 to bring them in as clips in SONAR.
  23. FWIW, it's the same way I make stuff that uses vocals or real instruments, etc. Anything I can't draw MIDI notes for and play from a synth I already have, I have to find or make or build or record audio clips of, then manipulate those to roughly approximate what I would *actually* put there if I could sing or play that instrument (well, if I could play *any* instrument). I even end up doing this for the output of the synths--there are many things not controllable via MIDI, or stuff that can't be done without editing the instrument itself and then running two copies of the synth (original plus edited), sometimes three or four to get the different versions.... So I end up rendering the synth out and manipulating the audio clip to do it instead, which doesn't necessarily give me exactly what I want either, but is far faster and simpler, and lets me get on with the other bits of stuff I want to create. (That's the real problem with tools to create things--they have to be designed to make that creation easy and fast, to stay out of the way of the creative process and let the artist do what the artist needs to get done while the idea is still there... it's not like a mechanical process with specific steps that must be performed a certain way every time, like machining something on a lathe, where you know exactly what you must do and can always pick up where you left off if you're interrupted... creativity doesn't work like that.) It's not so much "smarter" as "better designed for the purpose". The tools can only learn the types of things they're programmed to. If they aren't set up to learn things specific to the task at hand, they won't improve their ability to do the task, and that's the problem right now.
Most of these things are built as LLMs (large language models); you can look up the specifics of how they work, but basically they take a huge database of "input" and train the model on that data to create patterns in its behavior. Then, while in use, they may also learn things specific to that user or task, but they are not human and cannot learn like a human can, where you can just explain what they're doing wrong and how they should do it correctly, and fix a problem that way. I don't yet know enough to even begin creating one, but eventually my wolfy-bot project will use a type of one of these things to control its behavioral patterns, so that it can learn like a dog or wolf does, from interactions with the user, and can be trained like a real canine, based on "instincts" already preprogrammed into it that "reward" it for correctly learning something, for instance. The catch with present versions of any of these systems is that they are complex black boxes--even the programmers have no real idea of what goes on inside them to take a specific set of inputs and produce a specific set of outputs. So there is no way to go in and edit a "behavior" in any of them. If it really learns something "wrong" that you don't want in the model, you'd have to erase the whole thing and start over from a backup of the model from before it learned that. There should be a way to back up and insert specific... well, I'll call them engrams, or behavioral routines, but at present it's like our own brains--we don't know which "neurons" and paths between them are actually used for any specific behavior or bit of "knowledge", etc. If a model is designed for it, you could retrain it to do a different behavior for an input, but just like in actual brains, the old paths and data are still there, so if they are triggered by a specific set of inputs and conditions, the old behavior could still be used even though you never want that to happen.
At their cores, you could think of these paths like a giant slanted table with dimples and bumps in it that you roll balls down, so that the balls are deflected down paths with deeper "channels" more often than shallower ones, and steered away from paths with higher bumps. Every reinforcement of a path increases the size of the dimples and bumps on the way to and thru that path. But... you can't see any of those bumps or paths from outside, can't see where the ball actually goes while on the table, so you can't note down pathways and manually change them, can't back them up or copy them individually to a different table, etc. You can artificially alter the bumps or paths, but not knowing which specific behaviors they're for, you can only make general changes (kinda like the sliders on the attitude/ability control tablets for the AIs in Westworld). You can copy an entire AI database of behaviors, but not pick and choose individual ones. That's something that I know *could* be changed, but hasn't been yet (and I am not a programmer, so I don't know how, but there's no reason the system couldn't be designed to do this--it just hasn't been yet).
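The slanted-table picture above can be sketched as a toy simulation (purely my own illustration of the analogy, not how any real LLM works): each path has a channel "depth", balls pick paths with probability proportional to depth, and reinforcing a path deepens its channel. The path names and numbers are all made up.

```python
import random

# Toy "slanted table": each path has a channel depth, and a rolling
# ball is deflected down deeper channels more often than shallow ones.
depths = {"path_a": 1.0, "path_b": 1.0, "path_c": 1.0}

def roll_ball(depths, rng):
    """Pick a path with probability proportional to its channel depth."""
    paths = list(depths)
    weights = [depths[p] for p in paths]
    return rng.choices(paths, weights=weights, k=1)[0]

def reinforce(depths, path, amount=1.0):
    """Deepen the chosen channel, so future balls favor it more."""
    depths[path] += amount

rng = random.Random(0)

# "Training" repeatedly rewards path_b, carving a deep channel for it.
for _ in range(50):
    reinforce(depths, "path_b")

rolls = [roll_ball(depths, rng) for _ in range(1000)]
print(rolls.count("path_b"))  # the deep channel dominates...
print(rolls.count("path_a"), rolls.count("path_c"))  # ...but shallow
# paths were never erased, so the "old behaviors" can still fire.
```

This also shows the editing problem described above: the only handle we have is the whole `depths` table at once; there is no labeled "behavior" to cut out, only relative channel sizes.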
  24. The thing that really has to happen is for the "programmer mentality" (I don't know a better phrase, but if you do, I can replace this with it) to go away and be replaced with a "serve the user" mentality. Virtually every programmer (individual or company) I've ever attempted to work with or suggest things to has a "vision" of what they want a program to do, but that almost never coincides with what a user of that program actually needs it to do, or the way the user needs to interact with it, and when the two conflict, the user virtually always loses. There are individual feature exceptions here and there in some programs, but as a general rule this prevents every piece of software out there from fully serving the purposes users need to put it to. Some software is much, much worse than others, and grows less usable with every iteration despite more features being added. Some software decisions are probably controlled by marketing or whatnot for stuff that's for sale, but even with open-source free stuff the same issues arise (though at least there, when they say "go fork yourself" it means something a bit more useful :laugh: ). I've attempted to communicate feedback to all of the developers of the AI tools I've tried out, with zero response from any of them yet (some of the tools don't have a way to provide feedback). The Google Labs "ImageFX" tool *has* changed to include a version of some parts of the feedback I've given (though I'm certain many others have given the same feedback), but the bits they did include don't fix anything and don't make it any more useful--some of it actually makes it harder to use, with less predictable output, which makes it less useful overall.
And none of the most important things have changed at all--it still doesn't understand even the most basic bits of terminology, and doesn't apply what it sometimes knows consistently, so you still can't predict what you will get out of it, which makes it far less useful as a real tool and, like most of these things, more of a gimmick.