
Sorana Flow — free macOS music player with DSP (feedback welcome)

Thank you for myHRTF/libmysofa, which Sorana Flow uses for HRTF crossfeed processing!
Just to avoid misunderstandings, I'm not the author of libmysofa.

Not sure how far you want to go with the integration of SOFA files. Typically a SOFA file contains only the direct wave without any room reflections, so music convolved with IRs from SOFA files sounds like listening in an anechoic chamber, which is not what we want. Hence, you need to create room reflections using the SOFA file, and this tends to be a new rabbit hole if you want to do it properly...
 
Just to avoid misunderstandings, I'm not the author of libmysofa.

Not sure how far you want to go with the integration of SOFA files. Typically a SOFA file contains only the direct wave without any room reflections, so music convolved with IRs from SOFA files sounds like listening in an anechoic chamber, which is not what we want. Hence, you need to create room reflections using the SOFA file, and this tends to be a new rabbit hole if you want to do it properly...
@myHRTF Thanks for the correction — I'll update the credit. And that's a great point about SOFA files and anechoic limitations.

You're right that raw SOFA HRTFs give you the direct path without room reflections, which can sound unnaturally dry. For now, our HRTF implementation is intentionally basic — we load SOFA files via libmysofa and apply them as-is, which works well enough for basic binaural rendering with headphones.
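Just to make "apply them as-is" concrete, here's a rough, simplified sketch of the load/lookup step using libmysofa's easy API. The file name, sample rate, and direction are placeholders, and this isn't the actual Sorana Flow code:

```cpp
#include <mysofa.h>   // libmysofa "easy" API
#include <cstdio>
#include <vector>

int main() {
    int filterLength = 0;   // IR length per ear, reported by libmysofa
    int err = 0;

    // Open the SOFA file at the player's sample rate; libmysofa resamples
    // the IRs internally if the file was measured at a different rate.
    MYSOFA_EASY *sofa = mysofa_open("example_hrtf.sofa", 48000.0f, &filterLength, &err);
    if (!sofa) {
        std::fprintf(stderr, "mysofa_open failed (error %d)\n", err);
        return 1;
    }

    // Look up the left/right impulse responses for one incidence direction.
    // Coordinates start as spherical (azimuth deg, elevation deg, radius m)
    // and are converted to the Cartesian form the lookup expects.
    float pos[3] = {30.0f, 0.0f, 1.0f};
    mysofa_s2c(pos);

    std::vector<float> irLeft(filterLength), irRight(filterLength);
    float delayLeft = 0.0f, delayRight = 0.0f;   // broadband per-ear delays reported by libmysofa
    mysofa_getfilter_float(sofa, pos[0], pos[1], pos[2],
                           irLeft.data(), irRight.data(),
                           &delayLeft, &delayRight);

    // "Apply as-is" = convolve each source channel with these IRs (plus the
    // delays) and sum into the binaural L/R output.
    std::printf("Loaded %d taps per ear\n", filterLength);

    mysofa_close(sofa);
    return 0;
}
```

Typical anechoic HRTF sets only report a few hundred taps here, which is exactly the dryness issue you describe: there is simply no room in those IRs.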

Adding synthetic early reflections or a room model on top of the HRTF is definitely interesting but, as you said, a rabbit hole I'm not ready to dive into yet. For now I'd rather keep it simple and let users combine HRTF with our convolution engine if they want to layer in room characteristics from a separate IR.

Appreciate the insight — your feedback on the upsampling reorder already saved us 4x CPU overhead, so I always pay attention when you chime in.
 
we load SOFA files via libmysofa and apply them as-is, which works well enough for basic binaural rendering with headphones.
In this case I recommend also allowing SOFA files that contain room reflections, i.e. SOFA files with long IRs (min. 16k samples @ 44.1 kHz, the more the better).

Typically, SOFA files are used in combination with head-tracking, hence you may want to consider the support of head-trackers. But first things first of course.
 
In this case I recommend also allowing SOFA files that contain room reflections, i.e. SOFA files with long IRs (min. 16k samples @ 44.1 kHz, the more the better).

Typically, SOFA files are used in combination with head-tracking, hence you may want to consider the support of head-trackers. But first things first of course.
@myHRTF Great suggestions — thank you.

1. Long IR support — I'll check if our SOFA loading has a length limit. Supporting longer IRs (16k+ samples) that include room reflections makes sense and should be straightforward if it's not already handled; at that length the convolution itself is best done with FFT-based (fast) convolution rather than sample-by-sample (rough sketch after this list).

2. Head-tracking — definitely interesting for the future, but as you say, first things first. Getting the basics solid before adding spatial tracking.
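To make point 1 concrete: at 16k+ taps, direct (per-sample) convolution gets expensive, so fast convolution via the FFT is the standard approach. Below is a minimal offline sketch of the idea, with a hand-rolled FFT for illustration only; a real-time player would use an FFT library and a uniformly partitioned version of the same math, and none of this is Sorana Flow's actual engine code.

```cpp
#include <complex>
#include <vector>
#include <cmath>
#include <cstddef>

using cd = std::complex<double>;

// In-place iterative radix-2 FFT (size must be a power of two).
static void fft(std::vector<cd> &a, bool invert) {
    const std::size_t n = a.size();
    const double PI = std::acos(-1.0);
    // Bit-reversal permutation
    for (std::size_t i = 1, j = 0; i < n; ++i) {
        std::size_t bit = n >> 1;
        for (; j & bit; bit >>= 1) j ^= bit;
        j ^= bit;
        if (i < j) std::swap(a[i], a[j]);
    }
    // Butterfly passes
    for (std::size_t len = 2; len <= n; len <<= 1) {
        const double ang = 2.0 * PI / double(len) * (invert ? 1.0 : -1.0);
        const cd wlen(std::cos(ang), std::sin(ang));
        for (std::size_t i = 0; i < n; i += len) {
            cd w(1.0);
            for (std::size_t j = 0; j < len / 2; ++j) {
                const cd u = a[i + j];
                const cd v = a[i + j + len / 2] * w;
                a[i + j] = u + v;
                a[i + j + len / 2] = u - v;
                w *= wlen;
            }
        }
    }
    if (invert)
        for (cd &x : a) x /= double(n);
}

// Convolve a signal with a (possibly very long) impulse response via the FFT:
// O(N log N) instead of O(N * irLength) for direct convolution.
std::vector<float> fftConvolve(const std::vector<float> &x, const std::vector<float> &ir) {
    if (x.empty() || ir.empty()) return {};
    const std::size_t outLen = x.size() + ir.size() - 1;
    std::size_t n = 1;
    while (n < outLen) n <<= 1;               // next power of two

    std::vector<cd> X(n), H(n);
    for (std::size_t i = 0; i < x.size(); ++i)  X[i] = x[i];
    for (std::size_t i = 0; i < ir.size(); ++i) H[i] = ir[i];

    fft(X, false);
    fft(H, false);
    for (std::size_t i = 0; i < n; ++i) X[i] *= H[i];   // pointwise product = convolution
    fft(X, true);

    std::vector<float> y(outLen);
    for (std::size_t i = 0; i < outLen; ++i) y[i] = static_cast<float>(X[i].real());
    return y;
}
```

With the FFT the cost grows only with the log of the IR length, so 16k or even 64k taps stops being a problem; the real-time version just splits the IR into uniform partitions so latency stays at one audio block.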

I appreciate the practical, incremental approach. This kind of expert feedback is invaluable.
 
For those who are taking this work seriously: Please note that OP has no background in coding whatsoever. All of his work - including the application, website, and even the responses in this post's Q&A - has been generated entirely by AI with no proper review. (Notice his pronoun usage)

OP has a history of being banned from a Korean audio community for the same behavior. He claimed to have developed a program similar to Dirac and shared the code; however, the program was non-functional, and it was evident that OP did not understand the code he provided at all.

I believe that anyone intending to provide feedback or contribute to this work should be informed about this.
 
For what it's worth - I've had private dialogue with the developer that certainly wasn't AI generated. I don't care how software comes into being or if AI is being used to create it as long as it works in the end. He's admitted he's young and, as such, has limited life experience and reference to the bigger audio world history and trends. We all start somewhere....

I'll keep trying to help at any rate to see where it goes.
 
For those who are taking this work seriously: Please note that OP has no background in coding whatsoever. All of his work - including the application, website, and even the responses in this post's Q&A - has been generated entirely by AI with no proper review. (Notice his pronoun usage)

OP has a history of being banned from a Korean audio community for the same behavior. He claimed to have developed a program similar to Dirac and shared the code; however, the program was non-functional, and it was evident that OP did not understand the code he provided at all.

I believe that anyone intending to provide feedback or contribute to this work should be informed about this.
As I have clearly stated before, the Dirac-like project was an old test — built 100% with AI, without any real understanding on my part. I've never tried to hide that; every time I posted about it, I said so clearly.

However, the current project (Sorana Flow) is a different story. Yes, AI is partially used as a tool, but I personally study and understand the architecture before implementing anything — including the audio core, DSP chain, and FFmpeg integration. I do not blindly accept AI-generated code.

My primary goal is to learn, experiment, and share my progress openly.

Regarding the Korean community, I faced continuous harassment simply for using AI at all, with no constructive feedback. That's the real reason.

If an AI-assisted learning project isn't for you, you're free to ignore it. But I always welcome genuine feedback from anyone willing to help.
 
For what it's worth - I've had private dialogue with the developer that certainly wasn't AI generated. I don't care how software comes into being or if AI is being used to create it as long as it works in the end. He's admitted he's young and, as such, has limited life experience and reference to the bigger audio world history and trends. We all start somewhere....

I'll keep trying to help at any rate to see where it goes.
Thank you so much for your kind words and support — it means a lot to me. I'm learning a great deal through this project, and feedback like yours is incredibly helpful. I'll keep doing my best to improve it. Thanks!
 
I don't think there will be any new features this week. I'm planning to fix all the outstanding bugs first and then do some refactoring; the app should be much more stable after that.
 
I don't think there will be any new features this week. I'm planning to fix all the outstanding bugs first and then do some refactoring; the app should be much more stable after that.
FWIW, I ran into the crash when clicking on Albums again. Deleting the directory in Application Support fixed it again, but it also deleted all the other things I had set up, so it's pretty damned annoying.
 
FWIW, I ran into the crash when clicking on Albums again. Deleting the directory in Application Support fixed it again, but it also deleted all the other things I had set up, so it's pretty damned annoying.
Thanks for the bug report, and sorry for the inconvenience! I'll fix most of them this week — you'll be really happy using the app, with no errors or bugs!
 
This is getting ridiculous.
42 updates in just three weeks, yet the program is still unstable even with a task as basic as clicking to open albums.

While I admire your bold ambition in developing a music player, I suggest that you thoroughly test your software before releasing it to the public.
The audio community members are not your beta testers.
 
This is getting ridiculous.
42 updates in just three weeks, yet the program is still unstable even with a task as basic as clicking to open albums.

While I admire your bold ambition in developing a music player, I suggest that you thoroughly test your software before releasing it to the public.
The audio community members are not your beta testers.
Thank you for the honest feedback, @JBMLEE — you raise a fair point.

You're right that 42 releases in three weeks is excessive from a user's perspective. What felt like "rapid iteration" on my end clearly came across as instability on yours, and that's on me.

Regarding the album-opening issue — could you share the details (file format, library size, any error messages)? I'd like to investigate and fix it properly rather than rushing another patch.

Going forward, I'm shifting to a more disciplined release cycle: fewer, better-tested updates rather than pushing fixes as soon as they're written. The current unit test suite covers 190+ tests across 7 modules, but that clearly isn't catching enough edge cases in real-world usage. I'm expanding test coverage, particularly around library operations.
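For a concrete (made-up) example of what I mean by tests around library operations — the helper and file name here are hypothetical, not the real Sorana Flow test code, but this is the shape of it:

```cpp
// file: tst_albumkey.cpp (the .moc include below must match the file name under CMake automoc)
#include <QtTest>
#include <QString>

// Stand-in for a library helper: normalize raw album tags so the Albums view
// never receives an empty key (the kind of invariant that a crash report
// usually points back to).
static QString albumKey(const QString &rawTag)
{
    const QString trimmed = rawTag.trimmed();
    return trimmed.isEmpty() ? QStringLiteral("Unknown Album") : trimmed;
}

class TestAlbumKey : public QObject
{
    Q_OBJECT
private slots:
    void emptyTagMapsToUnknown();
    void whitespaceTagMapsToUnknown();
    void normalTagIsPreserved();
};

void TestAlbumKey::emptyTagMapsToUnknown()      { QCOMPARE(albumKey(QString()), QStringLiteral("Unknown Album")); }
void TestAlbumKey::whitespaceTagMapsToUnknown() { QCOMPARE(albumKey(QStringLiteral("   ")), QStringLiteral("Unknown Album")); }
void TestAlbumKey::normalTagIsPreserved()       { QCOMPARE(albumKey(QStringLiteral("Kind of Blue")), QStringLiteral("Kind of Blue")); }

QTEST_APPLESS_MAIN(TestAlbumKey)
#include "tst_albumkey.moc"
```

Small and boring on purpose — the point is pinning down the library-side invariants the Albums view relies on before the UI ever sees the data.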

I appreciate the directness. Building a quality audio player demands higher standards than what I've been shipping, and feedback like this helps me get there.
 
This is getting ridiculous.
42 updates in just three weeks, yet the program is still unstable even with a task as basic as clicking to open albums.

While I admire your bold ambition in developing a music player, I suggest that you thoroughly test your software before releasing it to the public.
The audio community members are not your beta testers.
First rodeo, huh?

Look, I've been involved in testing software for 30+ years now. It's always like this at the beginning and internally. The only difference here is that this guy is doing it publicly and letting yall who don't normally get involved this early have a crack at it. I think the only possible issue is that he maybe didn't initially make it clear to you that he's pushing alpha builds out publicly, which created a mistaken impression for you that this would be the same experience as you're used to with mature, commercially published software.

Now I have my concerns about the whole vibe coding thing. Eventually, he's going to run into a bug that the AI won't patch for him and he's going to have no idea what to do about it. But the pace of releases is entirely normal for this stage of development.
 
First rodeo, huh?

Look, I've been involved in testing software for 30+ years now. It's always like this at the beginning and internally. The only difference here is that this guy is doing it publicly and letting yall who don't normally get involved this early have a crack at it. I think the only possible issue is that he maybe didn't initially make it clear to you that he's pushing alpha builds out publicly, which created a mistaken impression for you that this would be the same experience as you're used to with mature, commercially published software.

Now I have my concerns about the whole vibe coding thing. Eventually, he's going to run into a bug that the AI won't patch for him and he's going to have no idea what to do about it. But the pace of releases is entirely normal for this stage of development.
Like you said, I just faced a bug that even the AI can't solve by itself — I'm analyzing everything related to it myself in order to fix it.
 
So please explain it to me. I have no idea what this vibe coding is all about. I have many years' experience in good old-fashioned C code in embedded DSP systems. But that ended in the early 2000s.

How do you tell the AI what you want it to do? Write a specification for it? Speak to it nicely? Tell it what libraries to use (or not use)?

So when this AI generates code, can you then see (read) what it has generated?

What language is the code generated in?

Does it just generate source code which you then have to compile? Does it also do the compiling to object code?

Can you step through and trace code execution in what it has generated? View variables? Look at data tables?

What kind of debugging support is provided?

In fact, what is the environment that you are doing the development in? Is it in the form of an IDE (integrated development environment)? Xcode?

Is it all just a series of black boxes?

So many questions...

My daughter is a senior Machine Learning software engineer working for Apple. She does not have a high opinion of vibe coding.
 
My daughter is a senior Machine Learning software engineer working for Apple. She does not have a high opinion of vibe coding.

Listen to your daughter.

One other thing. Could you have her talk to the OS knucklehead coders at Apple that are in charge of the legacy screen saver code and see if they can fix that? It's been broken since the intro of Sequoia and has yet to be fixed. Thanks.
 
Listen to your daughter.

One other thing. Could you have her talk to the OS knucklehead coders at Apple that are in charge of the legacy screen saver code and see if they can fix that? It's been broken since the intro of Sequoia and has yet to be fixed. Thanks.
I do, don't you worry. :) She and I have had discussions about the new OS, not favorable ones. Unfortunately she's in operations, on the supply chain side. As much of a victim as the rest of us. Be happy the user interface clowns aren't in charge of order fulfillment.
 
So please explain it to me. I have no idea what this vibe coding is all about. I have many years' experience in good old-fashioned C code in embedded DSP systems. But that ended in the early 2000s.

How do you tell the AI what you want it to do? Write a specification for it? Speak to it nicely? Tell it what libraries to use (or not use)?

So when this AI generates code, can you then see (read) what it has generated?

What language is the code generated in?

Does it just generate source code which you then have to compile? Does it also do the compiling to object code?

Can you step through and trace code execution in what it has generated? View variables? Look at data tables?

What kind of debugging support is provided?

In fact, what is the environment that you are doing the development in? Is it in the form of an IDE (integrated development environment)? Xcode?

Is it all just a series of black boxes?

So many questions...

My daughter is a senior Machine Learning software engineer working for Apple. She does not have a high opinion of vibe coding.
Good questions — I'll try to answer them honestly.

What is "vibe coding" in this context?

I use Claude as a coding assistant, not as a black box that generates the entire app. The workflow is:
1. I write specifications and architecture decisions
2. AI suggests implementations
3. I review the overall logic and structure before integrating
4. I build, test, and debug using standard tools

The development environment:
1. Terminal-based workflow on macOS (Claude Code) — build, run, and debug from the command line
2. Language: C++17 with Qt6 framework
3. Build system: CMake
4. Standard debugging tools (lldb, qDebug output, sanitizers)

Does it just generate code, or compile too?

Code generation only. I compile manually via CMake, run the binary, and test. If it crashes or behaves wrong, I debug it the same way any C++ developer would.
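Concretely, "debug it the same way any C++ developer would" mostly means qDebug tracing plus lldb and sanitizer runs from the terminal. A trivial, made-up illustration of the tracing part (not actual app code):

```cpp
#include <QDebug>
#include <QString>
#include <QStringList>

// Hypothetical tracing helper -- illustrative only, not actual Sorana Flow code.
// When something like the Albums crash shows up, the first step is usually to
// log what the failing code path actually received and rerun from the terminal.
static void logAlbumBatch(const QStringList &albumTitles)
{
    qDebug() << "Albums view: received" << albumTitles.size() << "titles";
    for (const QString &title : albumTitles) {
        if (title.isEmpty())
            qWarning() << "Albums view: empty title in batch -- likely the bad entry";
    }
}

int main()
{
    logAlbumBatch({QStringLiteral("Kind of Blue"), QString()});
    return 0;   // rerun under lldb or an AddressSanitizer build when logs aren't enough
}
```

Not sophisticated, but combined with a sanitizer build it's usually enough to localize a crash like the Albums one.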

Regarding "listen to your daughter":

That's fair criticism of vibe coding in general — and I agree with the concerns when AI-generated code isn't understood or verified. I won't claim I catch every issue before it ships; the bug reports from this community are proof of that. But the 190-unit-test suite and the fixes I've pushed based on your feedback are the actual quality signal — not the development method itself.

The honest answer: the approach has real limits, and I've hit them. I'd rather acknowledge that than oversell it.
 
Good questions — I'll try to answer them honestly.

What is "vibe coding" in this context?

I use Claude as a coding assistant, not as a black box that generates the entire app. The workflow is:
1. I write specifications and architecture decisions
2. AI suggests implementations
3. I review the overall logic and structure before integrating
4. I build, test, and debug using standard tools

The development environment:
1. Terminal-based workflow on macOS (Claude Code) — build, run, and debug from the command line
2. Language: C++17 with Qt6 framework
3. Build system: CMake
4. Standard debugging tools (lldb, qDebug output, sanitizers)

Does it just generate code, or compile too?

Code generation only. I compile manually via CMake, run the binary, and test. If it crashes or behaves wrong, I debug it the same way any C++ developer would.

Regarding "listen to your daughter":

That's fair criticism of vibe coding in general — and I agree with the concerns when AI-generated code isn't understood or verified. I won't claim I catch every issue before it ships; the bug reports from this community are proof of that. But the 190-unit-test suite and the fixes I've pushed based on your feedback are the actual quality signal — not the development method itself.

The honest answer: the approach has real limits, and I've hit them. I'd rather acknowledge that than oversell it.
Thank you for this prompt reply. I have a much better picture in my head now. You won't believe some of the weird pictures that were going through my head. I'd pictured much less 'human in the loop' than the process you're describing.

So is the "coding" all just pulling objects from various libraries and assembling them? Or is there some automated in-line coding going on as well? You must be pretty conversant in C++ in order to be able to tie this all together and be able to do the debugging? So you're not a total coding beginner.



So you're doing this all command line? Not using Xcode? I've played with that a little bit ("hello world").
 