Amberwolf

Members
  • Posts: 625
  • Joined
  • Last visited
  • Days Won: 1

Amberwolf last won the day on February 23
Amberwolf had the most liked content!

Reputation: 238 Excellent
1 Follower
Recent Profile Visitors: 6,878 profile views
  1. Is it being bent by CC events or by an automation envelope?
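(If the project can export a standard MIDI file, here's a minimal way to check from Python, using the mido library; the filename is just a placeholder. Pitch-bend and CC events will show up in the scan, while an automation envelope lives in the project file and won't.)

```python
import mido  # pip install mido

mid = mido.MidiFile("exported_track.mid")  # placeholder path, not a real file
for i, track in enumerate(mid.tracks):
    for msg in track:
        # 'pitchwheel' = pitch-bend events; 'control_change' = CC events
        if msg.type in ("pitchwheel", "control_change"):
            print(f"track {i}: {msg}")
```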
  2. I've found some of the AI imagery outputs I've gotten to be weird and funny (some of them actually disgusting or repellent), and while I found it fun in the beginning, it's grown frustrating because the programs don't accept the common terminology required to tell them what output is actually wanted, and because there's no iterative editing capability: they all just generate a whole new output based on the updated prompt, so it's not possible to just fix the problems with something that is almost perfect; it starts over every time, always with something wrong. It's still fun to first see what weirdness it generates from a prompt, though. Despite having to manually edit the output I get, I still use it for my album "art" and a few other things. Someday I'd love to create a video like you have, for the various stories of my songs, if there's ever time. Which specific tools / sites did you use? I could at least try them out....
  3. What happened just before this started? Was there an install? Update? Etc? Was there an uninstall? Was there a change in environmental conditions? Weather? Power? Hardware? Cabling?
  4. Is that the *only* MIDI device in either category? What are the other devices listed there? Do they have MIDI functions? What do you see if you select "show hidden devices" in the menus up top?
  5. Thanks--that's better info, though I don't want the choirs, just the individual singers like Laurie and at least some of the others in the Seven Solo Voices bundle. (ok, I could *use* the choirs, too, but....)
  6. In Windows Device Manager, what MIDI devices show up in the Software devices category and in the Sound, video and game controllers category?
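(For cross-checking against what applications actually see, a minimal sketch using the mido library, assuming the python-rtmidi backend is installed, lists the same MIDI ports Windows exposes:)

```python
import mido  # pip install mido python-rtmidi

# These enumerate the MIDI ports the OS reports to applications.
print("MIDI inputs:", mido.get_input_names())
print("MIDI outputs:", mido.get_output_names())
```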
  7. I just wish 8dio had all their libraries (at least the vocal ones) in Soundpaint, but there are several that are only in the other format (Kontakt? I don't remember) that I can't use, with no plans / timeline for SP versions. Also, the SP versions are not on the same sales as the other versions; generally the ones I want are not on sale except in the version I can't use. They said that even though they're the same company and team, since they are separate sites they have separate promotions. They've had sales where I could've skipped enough groceries / household stuff for a few months to get the SP versions, if an SP version existed and was on the sale, but...they can't have my money, because it doesn't and it won't be.
  8. If it sounds normal after you stop recording and are just playing back, then you probably have your audio interface's direct monitor turned on. On some of them this is a button on the front panel, like on my Avid USB Pro Duo. If this is turned on, then you will hear the input *twice*: once thru the DM function in the interface, and again thru the input monitoring within the recording program's track controls. If you're not monitoring with effects in the track, you can turn off IM in the track and then use the DM of the interface as a "zero latency" monitor of your recording. If you are monitoring with effects in the track, leave IM on and turn DM off.
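(Why the doubled signal sounds wrong and not just louder: the two paths arrive offset by the program's round-trip latency, and summing a signal with a delayed copy of itself combs out frequencies. A toy numpy sketch, using an assumed 10 ms latency figure:)

```python
import numpy as np

sr = 44100                 # sample rate
latency_ms = 10            # assumed round-trip latency of the software path
delay = int(sr * latency_ms / 1000)

t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 440 * t)   # 440 Hz test tone

# Direct-monitor path (immediate) summed with input-monitor path (delayed)
doubled = signal.copy()
doubled[delay:] += signal[:-delay]

# Frequencies at odd multiples of 1/(2*latency) cancel (comb filtering)
null_hz = 1 / (2 * latency_ms / 1000)
print(f"first comb-filter null near {null_hz:.0f} Hz")
```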
  9. Do you think people would look for and read a sticky better than a dozen threads with it in the title (some of which also say trouble, problem, etc)? (or the many threads / posts mentioning and then discussing ASIO4All...) (sorry; I've just seen this issue with pretty much any commonly-posted topic on any forum I've ever been to).
  10. Ditto. Could be the greatest sounds on earth, but that company...never mind.
  11. Thanks! That means I can probably use them with CONNCW.exe https://www.kvraudio.com/forum/viewtopic.php?p=8994597#p8994597 as clips in SONAR.
  12. FWIW, it's the same way I make stuff that uses vocals or real instruments, etc. Anything I can't draw MIDI notes for and play from a synth I already have, I have to find or make or build or record audio clips of, then manipulate those to roughly approximate what I would *actually* put there if I could sing or play that instrument, etc. (well, if I could play *any* instrument). I even end up doing this for the output of the synths--there are many things not controllable via MIDI for them, or stuff that can't be done without editing the instrument itself to do those things and then running two copies of the synth (original plus edited), sometimes three or four to get the different versions.... So I end up rendering the synth out and manipulating the audio clip to do it instead, which doesn't necessarily give me exactly what I want either, but is far faster and simpler, and lets me get on with the other bits of stuff I want to create.

(That's the real problem with tools to create things--they have to be designed to make that creation easy and fast, to stay out of the way of the creative process and let the artist do what the artist needs to get done while the idea is still there. It's not like a mechanical process that has specific steps that must be performed in a certain way every time, like machining something on a lathe, where you know the exact things you must do and if you are interrupted you can always start from where you left off...creativity doesn't work like that.)

It's not so much "smarter" as "better designed for the purpose". The tools can only learn the types of things they're programmed to. If they aren't set up to learn things specific to the task at hand, they won't improve their ability to do the task, and that's the problem right now. Most of these things are built as LLMs, large language models, which you can look up the specifics of how they work, but basically they take a huge database of "input" and then train the model with that data to create patterns in its behavior. Then, while in use, they may also learn things specific to that user or that task, but they are not human and cannot learn like a human can, where you can just explain what they're doing wrong and how they should do it correctly, and fix a problem that way.

I don't yet know enough to even begin creating one, but eventually my wolfy-bot project will use a type of one of these things to control its behavioral patterns, so that it can learn like a dog or wolf does, from interactions with the user, and can be trained like a real canine can, based on "instincts" that are already preprogrammed into it that "reward" it for correctly learning something, for instance.

The catch with present versions of any of these systems is that they are complex black boxes--even the programmers have no real idea of what is going on inside them to take a specific set of inputs and cause a specific set of outputs. So there is no way to go in and edit a "behavior" in any of them. If one really learns something "wrong" that you don't want in the model, you'd have to erase the whole thing and start over from a backup of the model from before it learned that. There should be a way to back up and insert specific...well, I'll call them engrams, or behavioral routines, but at present it's like our own brains--we don't know which "neurons" and paths between them are actually used for any specific behavior or bit of "knowledge", etc.

If a model is designed for it, you could retrain it to do a different behavior for an input, but just like in actual brains, the old paths and data are still there, so if they are triggered by a specific set of inputs and conditions, the old behavior could still be used even though you never want that to happen. At their cores, you could think of these paths like a giant slanted table with dimples and bumps in it that you roll balls down, so that the balls are deflected down paths that have deeper "channels" more often than shallower ones, and steered away from paths that have higher bumps. Every reinforcement of a path increases the size of the dimples and bumps on the way to and thru that path. But...you can't see any of those bumps or paths from outside, can't see where the ball actually goes while on the table, so you can't note down pathways and manually change them, can't back them up or copy them individually to a different table, etc. You can also artificially alter the bumps or paths, but not knowing which specific ones do what, you can only make general changes (kinda like the sliders on the attitude / ability control tablets for the AIs in Westworld). You can copy an entire whole AI database of behaviors, but not pick and choose individual ones. That's something that I know *could* be changed, but hasn't been yet (I am not a programmer so I don't know how, but there's no reason the system couldn't be designed to do this--it just hasn't been yet).
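(To make that table analogy concrete, here's a toy sketch: it has nothing to do with any real LLM's internals, just the idea that reinforcement deepens a channel but never erases the old one, so the old behavior still fires occasionally:)

```python
import random

# Toy "slanted table": each path has a channel depth (a weight).
paths = {"old_behavior": 1.0, "new_behavior": 1.0}

def roll_ball():
    # The ball is steered toward deeper channels, probabilistically.
    total = sum(paths.values())
    r = random.uniform(0, total)
    for name, depth in paths.items():
        r -= depth
        if r <= 0:
            return name

def reinforce(name, amount=1.0):
    paths[name] += amount  # deepen that channel

# "Retrain" heavily toward the new behavior...
for _ in range(50):
    reinforce("new_behavior")

# ...but the old channel is still there, so it still triggers sometimes.
rolls = [roll_ball() for _ in range(1000)]
print("old behavior triggered:", rolls.count("old_behavior"), "of 1000")
```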
  13. The thing that really has to happen is for the "programmer mentality" (I don't know a better phrase, but if you do I can replace this with it) to go away and be replaced with a "serve the user" mentality. Virtually every programmer (individual or company) I've ever attempted to work with or suggest things to has a "vision" of what they want a program to do, but that almost never coincides with what a user of that program actually needs it to be able to do, or the way the user needs to be able to interact with it, and when the two conflict, the user virtually always loses. There are individual feature exceptions here and there in some programs, but as a general rule it prevents every piece of software out there from being able to fully serve the purposes users need to put it to. Some software is much, much worse than others, and grows less usable with every iteration despite more features being added. Some software decisions are probably controlled by marketing or whatnot for stuff that's for sale, but even with open-source free stuff the same issues arise (though at least there, when they say "go fork yourself" it means something a bit more useful :laugh: ).

I've attempted to communicate feedback to all of the developers of the AI tools I've tried out, with zero response from any of them yet (some of the tools don't have a way to provide feedback). The Google Labs "ImageFX" tool *has* changed to include a version of some parts of the feedback I've given (though I'm certain many others have given the same feedback), but the versions of the bits they did include don't fix anything and don't make it any more useful--some of it actually makes it harder to use, with less predictable output, which makes it less useful overall. And none of the most important things have changed at all--it still doesn't understand even the most basic bits of terminology, and doesn't consistently follow what it sometimes knows, so you still can't predict what you will get out of it, which makes it far less useful as a real tool and, like most of these things, more of a gimmick.
  14. I don't suppose anybody knows if this is one of the few libraries built with a sample folder full of sound files, or whether it's a monolithic Kontakt-format soundfile? (Asking first before I download over 3 GB just to find out.) I'll never have the full Kontakt version to use it with, but if it has the individual sounds available I can still use those to create music (or whatever) with, as I do with other sound libraries and sample sets.
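(Once a library like this is downloaded, a quick scan along these lines shows whether it ships loose audio files or only Kontakt containers; the folder path is a placeholder:)

```python
import collections
import pathlib

library = pathlib.Path("Downloads/SomeLibrary")  # placeholder path
counts = collections.Counter(
    p.suffix.lower() for p in library.rglob("*") if p.is_file()
)

# Loose audio = usable directly; .nki/.nkx/.nkc/.ncw = Kontakt-format files
audio = sum(counts[ext] for ext in (".wav", ".aif", ".aiff"))
kontakt = sum(counts[ext] for ext in (".nki", ".nkx", ".nkc", ".ncw"))
print(f"loose audio files: {audio}, Kontakt-format files: {kontakt}")
```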
  15. FWIW, I wouldn't be able to use any of the output of these tools at all, except that I'm pretty good at "macgyvering" stuff (of any kind) together out of whatever I have available (a skill learned out of necessity from a young age and honed over the decades to a finely invisible edge). So like everything else I do, I take the bits it spits out that I see potential in, then chop them up and mix them with each other and other things and my own direct input, and get something resembling what I would create if I actually had everything I *really* needed to make it.