
  1. The problem is that you generally have no way of knowing what "enhancements" will be applied, what listening environment you are sending your music into, or ultimately how the listener will perceive it given the limitations of his own hearing. Not everyone listening will be using the same software or hardware. If you know you will only be presenting music on a single streaming service, it is worth investigating what "enhancements" that service applies, but otherwise you are probably best advised to mix and master using as few gimmicks as possible, and to make sure that any color you are applying as you listen occurs prior to recording in your signal chain. Mixing in an "enhanced" listening environment will only produce the results you expect on playback in that same environment. The listener can tweak his listening experience using whatever enhancements he has available, which are mostly filters of one sort or another. In practice, even "boosts" at certain frequencies often work by passing a band in the boosted region and then raising the overall volume of the surviving bands. If the frequencies are not in the native recording, they cannot be effectively manufactured by the listener's equipment.
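As an aside, the "filter plus gain" idea behind a boost can be sketched in a few lines. This is only an illustration, not any product's actual code: a standard peaking-EQ biquad using the well-known Audio EQ Cookbook coefficient formulas, with assumed values (48 kHz sample rate, 1 kHz center, +6 dB, Q = 1).

```python
import math

def peaking_eq_coeffs(fs, f0, gain_db, q=1.0):
    """Peaking-filter coefficients from the RBJ Audio EQ Cookbook,
    normalized so a0 == 1."""
    a = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0 = 1 + alpha * a
    b1 = -2 * math.cos(w0)
    b2 = 1 - alpha * a
    a0 = 1 + alpha / a
    a1 = -2 * math.cos(w0)
    a2 = 1 - alpha / a
    return [c / a0 for c in (b0, b1, b2)], [1.0, a1 / a0, a2 / a0]

def biquad(x, b, a):
    """Run a direct-form-I biquad over a list of samples."""
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for s in x:
        out = b[0] * s + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        y.append(out)
        x1, x2, y1, y2 = s, x1, out, y1
    return y

fs = 48000
b, a = peaking_eq_coeffs(fs, f0=1000, gain_db=6.0)
# A 1 kHz tone at amplitude 1.0 comes out roughly 2x (+6 dB) after settling.
tone = [math.sin(2 * math.pi * 1000 * n / fs) for n in range(fs // 10)]
boosted = biquad(tone, b, a)
```

The point of the sketch: the "boost" exists only as a reshaping of energy that is already in the signal; the filter cannot conjure frequencies the recording never contained.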
  2. Oh, great. Now we will have to re-master Paul Robeson's "More Than Middle Aged Heavily Complexioned Joe."
  3. The headphone output impedance of your interface is 10 ohms, with a recommended load (headphone input) impedance of 80 ohms, derived from the commonly recommended 8:1 ratio. As you noted, your problem is likely that the headphones are under-driven by the source. Interposing a headphone amplifier to boost the signal to the phones would probably be the best option if you want to maintain fidelity; getting phones with lower impedance would also work. You will probably already have your interface cranked to maximum volume, so increasing the digital signal coming into the interface will give you limited benefit. https://solidstatelogic.zendesk.com/hc/en-us/articles/360009315238-Headphones-impedance https://www.headphonesty.com/2019/04/headphone-impedance-demystified/
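The arithmetic behind that 8:1 guideline is simple enough to write down; the snippet below is just the worked numbers from the post, treating the source and headphones as a plain voltage divider (a simplification that ignores how impedance varies with frequency).

```python
def min_headphone_impedance(output_impedance_ohms, ratio=8):
    """Rule-of-thumb minimum headphone impedance for a given source
    output impedance, per the 8:1 guideline cited above."""
    return output_impedance_ohms * ratio

def voltage_delivered_fraction(output_z, load_z):
    """Fraction of the source voltage that reaches the headphones,
    modeling the pair as a simple voltage divider."""
    return load_z / (output_z + load_z)

print(min_headphone_impedance(10))                    # 80 ohms
print(round(voltage_delivered_fraction(10, 80), 3))   # ~0.889
```

A lower-impedance source (or higher-impedance-tolerant amp stage) pushes that delivered fraction closer to 1, which is the whole rationale for the ratio.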
  4. Is this my lifetime or CloudBounce's lifetime? I am pretty old, but I have already outlived many thousands of internet enterprises.
  5. There is also some confusion added to the discussion by the "albums-in-the-hope-chest" phenomenon. Typically a young artist comes to be recognized after he has been working for a decade or more without public success. Then he hits the audience with the best material he has created over that long period, and everyone thinks he is brand new at this and marvels at his creative productivity. A second or third album comes out with his lesser but still impressive songs, some of which have been sitting on ice for years, and the image of a huge amount of creative energy personified in an up-and-coming talent is solidified. As often as not, though, once he has used up his best stored material, his creativity seems to lag, because he is no longer putting out such great work every six months. Once he starts to go a couple of years between releases, he is judged a has-been who can no longer create new material, and that is assuming he has not succumbed to drug abuse, loss of motivation, or the distractions that so often accompany "success."
  6. Depending on how much you want to stretch the audio, some stretch algorithms will work better than others, but there are limits to how far you can stretch. Remember that when you stretch audio you are trying to fill more time with the same amount of data, so the extra samples have to be invented. An algorithm that fills in the missing data points must use some sort of interpolation; it must "guess" at what the complex wave would have been doing in the now-empty samples. The more empty samples it has to compensate for, the less likely it is to be correct.
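To make the "guessing" concrete, here is a deliberately naive stretch using linear interpolation. (Note this toy resample also shifts pitch; real DAW time-stretch algorithms, such as phase vocoders, preserve pitch, but the underlying problem of inventing in-between samples is the same.)

```python
def stretch(samples, factor):
    """Naively stretch a list of samples by `factor` using linear
    interpolation.  Every output sample that falls between two input
    samples is a guess: a straight line between the known points."""
    n_out = int(len(samples) * factor)
    out = []
    for i in range(n_out):
        pos = i / factor                   # position in the original signal
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

print(stretch([0.0, 1.0, 0.0], 2.0))  # [0.0, 0.5, 1.0, 0.5, 0.0, 0.0]
```

A straight line is the crudest possible guess; the real wave almost certainly curved between those points, and the larger the stretch factor, the more of the output is guesswork.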
  7. General MIDI controller pads are not just for, or even primarily for, triggering drums. They are probably most useful to the one-man band who needs to trigger chunks of pre-recorded musical sound in live performance. If you are really looking to put live-controlled drum sounds into the DAW, then a dedicated drum controller may be a better choice. If you are a drummer, an electronic drum kit would be better still, and of course much more expensive.
  8. Have you tried this? https://www.roland.com/us/support/by_product/fa-07/updates_drivers/e830c4fa-dd77-40da-ba6f-7c886c664cd4/
  9. Win 10 end of support is coming October 14, 2025. If you are buying a new computer, take the time to verify that it will run Windows 11. You cannot assume that any machine you find for sale as "new" will have the capability of running Win 11. There are machines that have not been sold yet that will not, and some of these will look like super bargains because the seller is trying to dump them before they spoil. Of course if you are planning to stay offline and run a defunct OS for another decade, one of these "old" new machines might be what you want. In that case waiting is likely to only bring the price down as buyers become more aware of the problem they will face in four years.
  10. Just to be clear, you are talking about MIDI velocity and not how "loud" (dB on the meter) the audio response is? If the lowest velocity visible in the Event List view is fixed, then 57 Gregy has it right. My keyboard lets me filter and adjust the values it sends in response to the physical keypress, so check whether that is a setting you can change. Inexpensive keyboards may simply lack the sensitivity to trigger and send low-velocity messages, or may, by design limitation, never send the full range available in the MIDI spec. If you are talking about the audio dB level's response to a keypress on the controller, that opens a whole other can of worms.
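The "never sends the full range" case is easy to picture with a hypothetical velocity curve. Nothing below is from any real keyboard's firmware; it just shows how a sensor reading can be squeezed into a sub-range of the 1-127 that MIDI allows.

```python
def limited_velocity(raw, lo=20, hi=110):
    """Hypothetical cheap-keyboard velocity curve: the normalized
    sensor reading (0.0-1.0) is mapped into [lo, hi], so the full
    MIDI range of 1-127 is never sent no matter how softly or how
    hard you play."""
    raw = max(0.0, min(1.0, raw))          # clamp the sensor reading
    return round(lo + raw * (hi - lo))

print(limited_velocity(0.0))   # softest touch still sends 20
print(limited_velocity(1.0))   # hardest hit tops out at 110
```

If your Event List never shows values below some floor regardless of touch, a curve like this (whether a user setting or a hardware limit) is the likely culprit.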
  11. MIDI files can include meta text events: text carried in the file that is not executed as performance data by MIDI-capable sequencers. That capability can be used to embed track names, sequencer- or application-specific data, copyright notices, etc., and presumably could carry the kind of information usually found in MP3 tags, but the location and content of such events is not standardized, so extracting that data from a file requires specifically looking for it.
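"Specifically looking for it" means walking the track chunks for meta events of the text classes (types 0x01-0x07 in the Standard MIDI File spec). The sketch below builds a toy one-track file and scans it; it is deliberately naive (single-byte delta times, no running status) and a real file should go through a full SMF parser or a library such as mido. The song title and copyright strings are invented for the example.

```python
import struct

def meta(delta, mtype, data):
    """Encode one meta event (single-byte delta time and length)."""
    return bytes([delta, 0xFF, mtype, len(data)]) + data

# Hypothetical minimal Standard MIDI File: one track holding a
# track-name (0x03) and a copyright (0x02) meta event, then end-of-track.
track_data = (meta(0, 0x03, b"My Song")
              + meta(0, 0x02, b"(c) Somebody")
              + meta(0, 0x2F, b""))                       # end of track
smf = (b"MThd" + struct.pack(">IHHH", 6, 0, 1, 480)
       + b"MTrk" + struct.pack(">I", len(track_data)) + track_data)

TEXT_TYPES = {0x01: "text", 0x02: "copyright", 0x03: "track name",
              0x04: "instrument", 0x05: "lyric", 0x06: "marker", 0x07: "cue"}

def text_events(data):
    """Naively scan the first MTrk chunk for text-class meta events.
    Assumes one-byte delta times and meta events only -- enough for
    this toy file, not for real-world SMFs."""
    found = []
    pos = data.index(b"MTrk") + 8            # skip chunk id and length
    while pos < len(data):
        pos += 1                             # skip one-byte delta time
        if data[pos] != 0xFF:
            break                            # not a meta event; give up
        mtype, length = data[pos + 1], data[pos + 2]
        payload = data[pos + 3 : pos + 3 + length]
        if mtype in TEXT_TYPES:
            found.append((TEXT_TYPES[mtype], payload.decode("latin-1")))
        if mtype == 0x2F:
            break
        pos += 3 + length
    return found

print(text_events(smf))
```

Because the spec only defines the event types, not where they must appear or what they must contain, two files can carry the "same" metadata in completely different places, which is exactly why generic tag readers ignore it.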
  12. Have you checked under the cushions on the couch?
  13. Somebody who does not know anything about the offerings of products on the market may well take the fact that software has won an award or topped a poll as indicating that it is the one to buy. It is not a trivial issue, or I doubt that KVR would be bothering.
  14. And if you export to wave? If the crackling is not encoded into the audio data, then questions about the rendered audio are moot: it would point instead to a problem in the signal chain through which you are listening, or to the playback system in Cakewalk.