Pyra audio - needs to be fantastic


I don't think it will be a problem -- audio is trivial compared to video. The Pyra will probably be able to play any full HD movie just fine*, even if the display is only 720p. Playing overly accurate sound is easy: just throw away the least significant bits and average the samples.
* Depending on how it's encoded.

FTFY.

-God Ginrai
 
* Depending on how it's encoded.
 
Which HD encoding would be a problem on an OMAP5?
I can imagine that lossless compression in some exotic format, for which no hardware acceleration or optimized codec is available, might cause trouble. I don't think that anything you're likely to find "in the wild" should be problematic though.
 
There's always that, true. But most codecs should play fine, especially H.264 (x264 encodes), which is the de facto standard for internet video nowadays.
 
anything 10-bit.

-God Ginrai
 Why would 10 bit color depth be a problem? Of course the display will only show 8 bit, so it's a bit pointless, but what's the problem? Just throw away 6 bits per pixel.
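(For what it's worth, the bit-throwing step itself really is trivial. A minimal numpy sketch, assuming 10-bit values in the 0-1023 range -- the array contents here are made up:

import numpy as np

ten_bit = np.array([0, 511, 512, 1023], dtype=np.uint16)  # hypothetical decoded 10-bit samples
eight_bit = (ten_bit >> 2).astype(np.uint8)               # drop the 2 least significant bits
print(eight_bit)                                          # [  0 127 128 255]

That's 2 bits per channel, i.e. the 6 bits per pixel mentioned above. As the replies below point out, the real cost is everything the decoder has to do before this step.)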
You must not watch 10-bit encodes. They require a lot of CPU power to decode at a decent speed. My fairly recent desktop computer (i7 processor) uses its full CPU to decode them with the Plex decoder. Granted, the Plex decoder is inefficient, but there was plenty of pushback in the anime fansub community until just recently, because people with older computers couldn't decode 10-bit anime at a reasonable speed. And you expect a mobile processor to do this?

-God Ginrai
 
I don't see what the theoretical problem is. Of course it could very well be the case that there is only an optimized implementation available for 8 bit and the decoder for 10 bit is unoptimized and very slow -- e.g. perhaps it does a full decoding to a 10 bit surface and then transforms that to an 8 bit surface. But that's just a matter of software. I don't think there's an inherent reason why 10 bit would be a whole lot slower than 8 bit -- especially if the target display is only 8 bit, so you can ignore the least significant bits early in the decoding. It will of course always be a bit slower, since even if you can effectively ignore all the irrelevant bits (which is unlikely), the extra fat in the file still causes some overhead.
 
dang it!! I was about to post the same thing, dude! hahaha.

...well, they're planning to keep the same audio circuitry the Pandora has, but I couldn't find what the highest audio quality is that the Pandora can output.

Neil Young's PONO can reproduce 24-bit audio, which is quite impressive. It will also come with 128 GB of internal storage and an SD card slot (at least that's what I understood). It will cost $399.

Could someone tell me the highest audio quality the Pandora can reproduce? Has anybody tried playing HD tracks on a Pandora?
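(A practical note on that question: on an ALSA-based Linux device like the Pandora, you can ask the driver what formats and rates the playback hardware accepts. Assuming alsa-utils is installed and the device name and test file below exist on your system:

aplay -D hw:0,0 --dump-hw-params /usr/share/sounds/alsa/Front_Center.wav

This prints the FORMAT and RATE ranges the hardware will take before playing the file.)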
Don't even start drumming up the b/s about any format other than 16/44.1. Save it for Head-Fi.

 


Since human hearing maxes out at 20kHz or so, shouldn't the highest required sample-rate be around 40kHz?

44.1kHz sounds sensible, 192kHz is completely pointless

 
Not only is it pointless, but anything higher than 44.1 is strictly worse, because it allows ultrasonic frequencies to exist, which intermodulate with audible frequencies, causing distortion. I might also add that supporting these rates is not an impressive feat by any means, and often requires that you make compromises somewhere else, which shits all over the overall sound quality.
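(A quick numpy sketch of that intermodulation effect -- not the Pandora's actual signal path, just two ultrasonic tones pushed through a mildly nonlinear stage:

import numpy as np

fs = 192_000                    # sample rate high enough to carry ultrasonic tones
t = np.arange(fs) / fs          # one second of samples
x = 0.5 * np.sin(2 * np.pi * 25_000 * t) + 0.5 * np.sin(2 * np.pi * 30_000 * t)
y = x + 0.1 * x**2              # mild second-order nonlinearity (amp or driver)

spec = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), 1 / fs)
band = (freqs > 1_000) & (freqs < 20_000)      # look only at the audible range
print(freqs[band][np.argmax(spec[band])])      # ~5000.0 Hz

Neither input tone is audible on its own, but the squared term creates a 30 kHz - 25 kHz = 5 kHz difference tone squarely in the audible band.)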

The Pandora sounds damn great already, but it could do with a lower output impedance. Low impedance headphones (99.9% of the consumer market and a good 50% of the audiophile industry) start to lose bass extension on it.  A larger capacitor (maybe 470uF instead of the 330uF currently in place) would help to remedy this slightly, if it'll fit in the case. Otherwise, I'm happy with mweston's design choices.
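(To put rough numbers on that: the output coupling cap and the headphone load form a first-order high-pass filter with cutoff f_c = 1 / (2 * pi * R * C). A back-of-the-envelope sketch, assuming a 16-ohm load as a typical low-impedance consumer headphone:

import math

R = 16.0                        # assumed headphone impedance in ohms
for C in (330e-6, 470e-6):      # current cap vs. the suggested larger one
    fc = 1 / (2 * math.pi * R * C)
    print(f"{C * 1e6:.0f} uF -> cutoff ~{fc:.0f} Hz")

That works out to roughly 30 Hz with the 330uF cap and about 21 Hz with 470uF, which is where the extra bass extension into low-impedance loads would come from.)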
 
anything 10-bit.

-God Ginrai
IIRC those videos are usually in the http://wiki.bakabt.me/index.php/Hi10P format. Never watched one myself, but that may give some idea of why it's slow. Basically there's no HW acceleration support available, and that format is used to get smaller file sizes without sacrificing image quality in material containing a lot of plain colors, such as cartoons or anime. More compression, no acceleration, high CPU use.

Disclaimer: I know nothing about this and it's all hearsay on my part.
 
Where do you even find source material in 10 bit color depth? All DVDs and BDs are 8 bit, and I don't think there are many cameras that record at that depth. And where do you find a screen that displays at that depth?

Or are those two extra bits just always two zeros? ;)
 
Very common in high-def compression of e.g. Anime. Quite annoying for low-power playback devices.
 
Very common in high-def compression of e.g. Anime.
The question is about the source: the anime is being compressed with 10-bit colour depth, but is the original source in 10-bit colour? They're originally ripped from DVDs and BDs, which are presumably 8-bit, so it seems the encoders are trying to add 2 bits of information that didn't previously exist, and running into problems for it.
 
Could it be that the higher precision reduces compression artifacts even if the source material is of lower precision?

To give an oversimplified example of what I mean: say I wanted to compress pixel values 0 and 1 to one value by averaging. With no decimal places the compressed form would be either 0 or 1. With one decimal place it would be 0.5, which is arguably the desired outcome. The source material here has one-digit precision and the compressed format two digits. The higher value precision leads to less truncation (when for example averaging values) and as such less compression artifacts at higher compression ratios.

That is how I understand the rationale, at least.
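(A toy numpy sketch of that rationale -- not real codec math, just repeated neighbour-averaging with intermediate results rounded to different precisions, compared against an effectively exact chain:

import numpy as np

rng = np.random.default_rng(0)
src = rng.integers(0, 256, size=100_000).astype(np.float64)  # 8-bit-style source values

def avg_chain(x, decimals):
    # average each value with its neighbour, rounding the intermediate
    # result to the given precision after every pass
    for _ in range(8):
        x = np.round((x + np.roll(x, 1)) / 2, decimals)
    return x

exact = avg_chain(src, 12)     # effectively no intermediate rounding
coarse = avg_chain(src, 0)     # intermediates kept at source precision
fine = avg_chain(src, 1)       # one extra digit of intermediate precision

print(np.abs(coarse - exact).mean())   # accumulated rounding error
print(np.abs(fine - exact).mean())     # roughly ten times smaller

The extra precision adds no information to the source; it just loses less of it at each step, which is the argument for 10-bit intermediates over an 8-bit source.)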
 
Surely the additional file size from adding two meaningless bits to every sample completely cancels out the advantages of using additional compression.

The process seems to be:

8-bit data --> add two additional '0' bits --> lots of compression --> small file --> lots of decompression --> discard two bits -->8-bit output

8-bit data --> less compression --> small file --> less decompression --> 8-bit output

Since most video-processing hardware is optimised for 8-bit stuff, and extra processing steps generally degrade quality, why would you prefer the first of those two?
 
^ The links GG provided say otherwise. For the same image quality, using 10-bit precision yields (according to one of the links) a 5-20% lower bitrate. To fix your example you need to either put "smaller file" or "better quality 8-bit output" into the first one :)

Still never tried this. Arguing simply based on hearsay and presented material :p
 
Still never tried this. Arguing simply based on hearsay and presented material
Yeah, this sounds like one of those things that is "common sense" but something weird and deep is going on that common sense can't account for. I would have to see it for myself to really believe it, I think.
 
Sounds like a way of working around compression algorithm flaws.

Since the amount of original information is the same, the only way to make the final output closer to the original (in an ideal, theoretical world) is to throw away less or different information during the processing stages.

5-20% lower bitrate.
Lower than what?

In our ideal world, you would expect the (x-bit raw data):(compressed output) size ratio to go down by 20% since you've artificially inflated the raw data from 8-bits to 10-bits without adding any information.
 