So, what's the current status of everything?


I started working on a function

I easily wrote 48K -> 96K (which is simply repeating each sample)

44.1K -> 48K/96K is a little trickier, since plain linear interpolation won't work: the ratio is clearly not integer-divisible, so I can't really just play with the samples directly

Like a good software developer, I immediately used Google to see if it had been done before or if good algorithms already existed. I found several that have no NEON implementation and might affect audio quality due to fast but not-so-precise interpolation. I also found out that NE10 has some great FFT and real-FFT (rfft) modules: in the frequency domain, changing 44.1KHz to 96K is going to be child's play and will result in better sound quality (and of course transforming back to the time domain afterwards). If the Pyra were an audiophile device, there would be no question that I'd want to FFT it, since I could also apply cool filters in the frequency domain. On the Pyra it might result in slower decoding, but still better than the most common way 44.1K->96K is resolved. By my calculation (since I don't have a Pyra dev unit and resort to emulation) it shouldn't pass 3% CPU load (and it is real-time audio decoding, ergo about 15ms to process 1 second of audio).
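To make the frequency-domain idea a bit more concrete, here is a rough numpy sketch of the zero-padding approach (NE10's float32 FFT would be the NEON-accelerated equivalent); block splitting, windowing and overlap handling are omitted, and the names are just illustrative:
Code:
import numpy as np

def fft_resample(x, src_rate=44100, dst_rate=96000):
    # x: one block of mono int16 samples
    n_in = len(x)
    n_out = int(round(n_in * dst_rate / src_rate))
    spectrum = np.fft.rfft(x.astype(np.float32))      # time domain -> frequency domain
    padded = np.zeros(n_out // 2 + 1, dtype=complex)  # longer spectrum, new bins stay zero
    padded[:len(spectrum)] = spectrum                 # original band kept untouched
    y = np.fft.irfft(padded, n_out) * (n_out / n_in)  # back to time domain, rescale amplitude
    return np.clip(np.round(y), -32768, 32767).astype(np.int16)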


Like EvilDragon already mentioned, 96K is a really high sample rate (one of the highest generally accepted sample rates on ARM devices), which is exactly why hardware decoding is available on such devices. However, since the drivers are not ready, we do want some fast decoder, so next week I will add my code for 48K->96K (a really simple one) to some repo on pyragit, including my tests

the question is, do we want fft -> convert -> rfft, or do we want area interpolation, which seems to be the generally accepted way to convert audio samples when plain linear interpolation doesn't apply?
I think it might be cool to have advanced software decoding alongside fast hardware decoding; it could give the Pyra several modes (audiophile mode / general-purpose mode), but it's going to be tough to tell the audio agent to switch between 3rd-party libraries and the drivers - what do you think?

P.S: the popular way to solve it is to upsample once, then use a buffer (preallocated at init) initialized with the ratios between movements of the left sample and the right sample relative to the output dimensions. Those factors tell you how many samples to move left or right within the chosen sample "chunk"; each sample is multiplied by its factor and summed together, and the sum is the new sample. Since each output sample has its own ratio, there is one output per sample, and since it is all based on sample locations it can easily be computed at init and kept around for later use. Because the locations and ratios are preallocated and precomputed, it's really, really fast (2ms). This solution is also linear: for each sample in the output we know which factors produced it, so if one factor was x% and the other was y%, we can simply multiply the output by x%/y% to recover the source samples (up to rounding and fixed-point error, due to int16 as opposed to float operations)
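Here is a minimal numpy sketch of that precomputed-factor idea, just to illustrate the init/runtime split; the int16 fixed-point factors and the NEON part are left out, and all names are illustrative:
Code:
import numpy as np

def init_tables(block_out, src_rate=44100, dst_rate=96000):
    # Init-time: for every output sample, remember the left source index and the
    # two weights. This never changes, so it is computed exactly once.
    pos = np.arange(block_out) * (src_rate / dst_rate)  # fractional source position
    left = pos.astype(np.int64)                         # index of the left source sample
    w_right = (pos - left).astype(np.float32)           # how far towards the right sample
    return left, 1.0 - w_right, w_right

def resample_block(x, left, w_left, w_right):
    # Runtime: one gather, two multiplies and one add per output sample.
    right = np.minimum(left + 1, len(x) - 1)
    y = x[left] * w_left + x[right] * w_right
    return np.round(y).astype(np.int16)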
 
Hello,
Can anyone tell me if we will hear the same 'Boiiing' with the Pyra?
This device will be so sad without it. It gives my old Pandora a real personality.

If the answer is no, I hope for something like this:
sudo apt-get install boooiinng
 
the question is, do we want fft -> convert -> rfft, or do we want area interpolation, which seems to be the generally accepted way to convert audio samples when plain linear interpolation doesn't apply?
My interpretation was always that an FFT-based solution would be too slow for general use, but I have no numbers on this. I was trying to imagine schemes to repeat samples to get 96kHz from a 44.1 input, but it's hard to get anything very precise. I wonder now how hard it would be to have something that runs kind of in real time (or some analogue of it) and feeds the same sample over and over until the 44.1kHz input feeds in a new one, which should average out to something really precise after a few rounds, rather than having to design a repeating scheme beforehand - and the benefit is that it would then work with almost any input.
 
Quick update:
The first prototype preorder has shipped and should arrive on Monday.
Four more are planned to ship on Monday - we just had a few issues flashing the OS on the internal eMMC. It flashed, but it doesn't boot, so we probably have a bug in the flasher script.

I'm having a few issues with two more where some keys are not working. I fixed that successfully on a different PCB - but these two have the problem that the diode is not just missing from the board - the trace has been ripped as well. See it on the second key from the left...

No idea if I can fix that, but I have some more boards left to test.
[Attached photo: IMG_20191220_111053.jpg]
 
Quick update:
The first prototype preorder has shipped and should arrive on Monday.
Four more are planned to ship on Monday - we just had a few issues flashing the OS on the internal eMMC. It flashed, but it doesn't boot, so we probably have a bug in the flasher script.

I'm having a few issues with two more where some keys are not working. I fixed that successfully on a different PCB - but these two have the problem that the diode is not just missing from the board - the trace has been ripped as well. See it on the second key from the left...

No idea if I can fix that, but I have some more boards left to test.

Congratulations! The first paid Pyra order has shipped! This is a great reason to celebrate!

Doing some math here... There were eight prototype pre-pre-orders. I'm number 8. There were eight in the recent picture tweeted out.
One shipped.
Four may be shipping Monday.
Two have physical issues and will be pushed out a bit.

That is seven...?

Take your time. I'm okay if mine gets pushed into the new year a bit. We have waited this long; another two weeks is statistical noise. I'm thrilled to see that one has gone out with the prospect of more to go!

Congratulations to whoever gets the first one!
 
My interpretation was always that an FFT-based solution would be too slow for general use, but I have no numbers on this. I was trying to imagine schemes to repeat samples to get 96kHz from a 44.1 input, but it's hard to get anything very precise. I wonder now how hard it would be to have something that runs kind of in real time (or some analogue of it) and feeds the same sample over and over until the 44.1kHz input feeds in a new one, which should average out to something really precise after a few rounds, rather than having to design a repeating scheme beforehand - and the benefit is that it would then work with almost any input.

I kind of both agree and disagree
If the output size was assured to be an integer multiple of the input size, I could interpolate linearly just by averaging samples (the quickest version would be adding two samples and shifting right by 1 to simulate division by 2)

However, our issue is that the interpolation is non-linear

The ratio for 96 to 44.1 is around 2.17, which means that for each input sample I'd want to copy it once more, add a "0.17" sample, and write the input sample, the copy and the "fake" sample to the output buffer

Since it's 2D data, the interleaving mode doesn't matter beyond transposing the data...

So if we could add a "0.17" sample it would be a simple upsample, but there is no such thing as a 0.17 sample

However, image processing suffers from the same issue when we resize images to a different aspect ratio: linear interpolation is used to get us closer to the target size, and then we upsample by a factor between 1 and 2 (1.17, for example) with anti-aliasing algorithms

Timothy Lottes from NVIDIA created a very fast anti-aliasing algorithm, namely FXAA (which would fix the wave corners to fit between edges, making sound playback seem less bumpy and more directional). It was benchmarked at 6.7 ms on a Radeon 240, which makes me confident that the Pyra is probably capable of about 3ms of FXAA per second of 44.1-to-96 conversion: one second of 44.1K at 16 bits is roughly 88K bytes, and image processing on 50K bytes on an ARM OMAP has already been benchmarked at about 2 ms. Considering how fast FXAA is, I assume that even if we can't apply FXAA directly, we can use the idea of scoring samples and averaging them, creating an extra sample every N samples so that the result comes out at 96K (if i % N == 0, create one extra sample based on a moving average of the previous M samples)
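As a toy illustration of that last idea: assuming the block has already been doubled to 88.2K, inserting one extra moving-average sample roughly every 11 input samples lands near 96K (88.2 * 12/11 is about 96.2). The N and M values here are just examples, not tuned:
Code:
import numpy as np

def insert_extra_samples(x, n=11, m=4):
    # x: int16 block already doubled to 88.2K
    out = []
    for i, s in enumerate(x):
        out.append(int(s))
        if (i + 1) % n == 0:                     # every n-th input sample...
            window = x[max(0, i - m + 1):i + 1]  # ...take the last m samples
            out.append(int(np.mean(window)))     # ...and append their average as the extra sample
    return np.array(out, dtype=np.int16)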

I read the article about FXAA; I'm actually confident it is applicable to sound waves, but I might be wrong - I'll probably find out next week


the upsample twice code is something similar to this

#include <stddef.h>
#include <stdint.h>

#ifdef __ARM_NEON

#include <arm_neon.h> // for NEON intrinsics

// Assumes input_sample is preallocated with sizeof(int16_t)*size
// and output_sample with sizeof(int16_t)*size*2.
void dualsample_int16(const int16_t* input_sample, size_t size, int16_t* output_sample)
{
    const int16_t* x = input_sample;
    int16_t* y = output_sample;
    size_t i = 0;

    // Process 8 input samples per iteration. vst2 interleaves the two copies of the
    // vector, so every input sample is written twice in a row (x0 x0 x1 x1 ...).
    for (; i + 8 <= size; i += 8, x += 8, y += 16)
    {
        int16x8_t sample_data = vld1q_s16(x); // load a vector of 8 int16 samples
        int16x8x2_t twice;
        twice.val[0] = sample_data;
        twice.val[1] = sample_data;
        vst2q_s16(y, twice);                  // store 16 interleaved samples
    }
    // Same algorithm with vectors of 4 for the remainder not divisible by 8.
    for (; i + 4 <= size; i += 4, x += 4, y += 8)
    {
        int16x4_t sample_data = vld1_s16(x);
        int16x4x2_t twice;
        twice.val[0] = sample_data;
        twice.val[1] = sample_data;
        vst2_s16(y, twice);
    }
    // Naive tail for whatever is left (fewer than 4 samples).
    for (; i < size; i++, x++)
    {
        *y++ = *x;
        *y++ = *x;
    }
}

#else

void dualsample_int16(const int16_t* input_sample, size_t size, int16_t* output_sample)
{
    int16_t* y = output_sample;
    for (size_t i = 0; i < size; i++)
    {
        *y++ = input_sample[i];
        *y++ = input_sample[i];
    }
}

#endif
 
Quick update:
The first prototype preorder has shipped and should arrive on Monday.
Four more are planned to ship on Monday - we just had a few issues flashing the OS on the internal eMMC. It flashed, but it doesn't boot, so we probably have a bug in the flasher script.

I'm having a few issues with two more where some keys are not working. I fixed that successfully on a different PCB - but these two have the problem that the diode is not just missing from the board - the trace has been ripped as well. See it on the second key from the left...

No idea if I can fix that, but I have some more boards left to test.

On the major plus side, the benefits of repeatable (and automated where possible) testing are abundantly clear!
 
Congratulations! The first paid Pyra order has shipped! This is a great reason to celebrate!

Heh, true. Didn't even realize that :)

Doing some math here... There were eight prototype pre-pre-orders. I'm number 8. There were eight in the recent picture tweeted out.
One shipped.
Four may be shipping Monday.
Two have physical issues and will be pushed out a bit.
That is seven...?

True :) One wanted to wait until the Display board is finished, so only 7 for now :)
Also, please note that they won't necessarily ship in order - there are various configurations of the PCBs the preorderers wanted, so there's no need to hold back EU units that work fine while I fix some US boards.
Additionally, I've put the PCBs into the units and then started to fix them... I don't plan to mix and match them, so the order can be a bit different.
I think yours is included in Monday's shipment (if we can get the eMMC flash to work by then, but it should be something simple).

Congratulations to whoever gets the first one!

This was number 4... and it goes to London, UK.
I can tell that much.

I started working on a function

Cool, it's nice seeing more helping hands here :)

Quick question: Where do you plan to include that in the end? Into the driver directly, so the driver offers multiple rates and ALSA / Pulse thinks these are supported by the hardware?
Otherwise ALSA / Pulse will always try to convert.
 
Heh, true. Didn't even realize that :)



True :) One wanted to wait until the Display board is finished, so only 7 for now :)
Also, please note that they won't necessarily ship in order - there are various configurations of the PCBs the preorderers wanted, so there's no need to hold back EU units that work fine while I fix some US boards.
Additionally, I've put the PCBs into the units and then started to fix them... I don't plan to mix and match them, so the order can be a bit different.
I think yours is included in Monday's shipment (if we can get the eMMC flash to work by then, but it should be something simple).



This was number 4... and it goes to London, UK.
I can tell that much.



Cool, it's nice seeing more helping hands here :)

Quick question: Where do you plan to include that in the end? Into the driver directly, so the driver offers multiple rates and ALSA / Pulse thinks these are supported by the hardware?
Otherwise ALSA / Pulse will always try to convert.

Actually, the best option is to compile a modded version of ALSA / the driver with my ARM optimization and offer that as a Pyra package on some PPA.

But I have never done that before. Let's solve the anti-aliasing challenge first, so we know we've got the fastest software decoding and there's a reason at all to deal with overriding stuff...

It might take me a while to inject my snippet into ALSA/Pulse... I find it very weird that ALSA/Pulse does not offer some kind of way to load plugin-like software decoding, as the decoding can usually be described in one C function (or even Python via numpy, which should be fast enough and would allow quick naive algorithm tests).
 
I've written a short Python script to transcode a 44.1k source of samples to a 96k output. It runs like this:
Code:
oldrate=44100
newrate=96000 # newrate must be higher - this code can't downsample
with open('beep.raw','rb') as i:
    with open('beep96k.raw','wb') as o:
        sample=i.read(2)
        index=0
        while True:
            o.write(sample)
            oindex=index
            index+=1;
            if index>=newrate: index=0
            if int(oindex*oldrate/newrate) != int(index*oldrate/newrate):
                sample=i.read(2)
                if len(sample)<2:
                    break
I ordered that multiply and divide in the comparison condition explicitly to avoid having to worry about floating-point precision. In C you'd need a 32-bit integer to do such a big multiplication to begin with; using this code with even bigger source rates such as 48k would break that limit, but if you only need 44.1k to 96k then you're fine with a 32-bit int. Maybe using a double-precision float might be more capable, and still not drop precision if you divide first then multiply.
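To put rough numbers on that: the worst case in the comparison is (newrate-1)*oldrate, and 95,999 * 44,100 is about 4.23 billion, which overflows a signed 32-bit int (max about 2.15 billion) but still fits an unsigned one (max about 4.29 billion), while 95,999 * 48,000 is about 4.61 billion and no longer fits even unsigned - hence the remark about bigger source rates.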
 
I've written a short Python script to transcode a 44.1k source of samples to a 96k output. It runs like this:
Code:
oldrate=44100
newrate=96000 # newrate must be higher - this code can't downsample
with open('beep.raw','rb') as i:
    with open('beep96k.raw','wb') as o:
        sample=i.read(2)
        index=0
        while True:
            o.write(sample)
            oindex=index
            index+=1;
            if index>=newrate: index=0
            if int(oindex*oldrate/newrate) != int(index*oldrate/newrate):
                sample=i.read(2)
                if len(sample)<2:
                    break
I ordered that multiply and divide in the comparison condition explicitly to avoid having to worry about floating-point precision. In C you'd need a 32-bit integer to do such a big multiplication to begin with; using this code with even bigger source rates such as 48k would break that limit, but if you only need 44.1k to 96k then you're fine with a 32-bit int. Maybe using a double-precision float might be more capable, and still not drop precision if you divide first then multiply.


If I NEONize that code it will be float32 either way... (NEON doesn't have float64xN_t registers)
So double precision has two reasons to be slower. If I write fast and optimized code based on your Python code, can you test the runtime on the real Pyra? If it's less than 30ms per second of audio, it's fast enough. As for precision - maybe we can offer a configuration with a slow decoder using naive double precision without SIMD for DSP?

Edit: I was wrong, there is a NEON double-precision register; I will try to use it

The ARM developer guide says it combines two float registers, which means that decoding while doing other FLOPs might be slow unless it's multicore-optimised
 
Seems like Grench might be able to test things out sooner than I can. I'm merely a normal pre-preorder, and I didn't pay extra for a prototype.

IIRC NEON can handle a uint64 in its 64-bit registers. I'd recommend doing the multiply first in that case.
 
Seems like Grench might be able to test things out sooner than I can. I'm merely a normal pre-preorder, and I didn't pay extra for a prototype.

IIRC NEON can handle a uint64 in its 64-bit registers. I'd recommend doing the multiply first in that case.

Sounds like a plan. On the previous page I already posted a function to double-sample 48 to 96; with this we've got 44.1 to 96 - seems enough, doesn't it?
 
For the first step, sure. My code should also be able to handle 22.05kHz sample transcoding as well, which apparently is used by some samples for games that presumably don't use audio above 10kHz (Nyquist would render anything above 11.025kHz as little more than noise).

That's assuming this is anywhere near quick enough. I don't think you can precompute oldrate/newrate, because even though I only need integer precision in the end, I can only guarantee that by multiplying first and then dividing the resulting huge number. It might be possible, though: if the ratio stored in a floating-point register gives exactly 44100 when multiplied by 96000 and rounded to int, that should guarantee the second boundaries line up at least, and from there I propose it's good enough to use.

Maybe it's fast enough to use anyway. I'm not sure how many cycles an integer divide takes from a NEON 64-bit register, and it needs to do that 96000 times a second, but even in my python form it seems to be somewhere around realtime.

But then you need to figure out getting this working inside the kernel, which I have no experience of myself.

And if someone figures out how to use the hardware transcoding first then this work is wasted, but there you go.
 
first paid prototype pyra shipped! this is an awesome milestone! congratulations ed! :cool:

I presume that is a sound played at boot-up, as opposed to a noisy spring?

Sounds pretty easily doable I imagine just by downloading the sound file and playing it... (via a start-up script)

I think he meant the audio-buffer release sound when the Pandora sound circuit goes to sleep. ;)
 