The Pyra is crying!


The joys of hardware design. ED pretty much jinxed this from the start with the overconfidence he showed about it, I was just waiting for a mistake in the layout to show its ugly face :p
I was expecting a mistake or more, but was hoping it was one that doesn't prevent testing the rest!
 
The joys of hardware design. ED pretty much jinxed this from the start with the overconfidence he showed about it, I was just waiting for a mistake in the layout to show its ugly face :p
I was expecting a mistake or more, but was hoping it was one that doesn't prevent testing the rest!
Ah well, your cautious approach is well founded.
It's got to be just right before more PCBs are produced.
 
There should totally be an option to "park" tabs that have been inactive for a long time. Either by simply reloading the page the next time the tab is opened, or saving them to disk for local loading once the tab is reopened. This would save a LOT of RAM and, more importantly, swapping for people with a reasonable amount of RAM (4 to 8 GiB).
At least Firefox doesn't load tabs in other tab groups until you open the group. I have several tab groups that I use like transient bookmarks. I have a "to read" tab group, several context-specific tab groups and a "general" tab group. I clear the "general" tab group every once in a while, putting any pages I still need into separate groups. This way when I open the browser it only loads a handful of pages, while still keeping the other tabs available. I use bookmarks only for permanent stuff.
 
There should totally be an option to "park" tabs that have been inactive for a long time. Either by simply reloading the page the next time the tab is opened, or saving them to disk for local loading once the tab is reopened. This would save a LOT of RAM and, more importantly, swapping for people with a reasonable amount of RAM (4 to 8 GiB).
At least Firefox doesn't load tabs in other tab groups until you open the group. I have several tab groups that I use like transient bookmarks. I have a "to read" tab group, several context-specific tab groups and a "general" tab group. I clear the "general" tab group every once in a while, putting any pages I still need into separate groups. This way when I open the browser it only loads a handful of pages, while still keeping the other tabs available. I use bookmarks only for permanent stuff.
It looks like tabs/windows and bookmarks are too crude an approximation of what some people really want: layers of cache.

  • pages currently in use (switching between them, having multiple pages on screen): these should be in memory in rendered form, scripts running and all
  • pages not currently in use: pause the scripts; after a while, unload them to free whatever memory they use
  • pages that are inactive for a while but might still be needed 'soon': these could still be in memory (as compressed HTML/CSS/JS) but not rendered, so much more compact
  • pages that are inactive for a longer time: stored on disk (even the bits that are not supposed to be cached)
  • pages that are inactive for an even longer time: only store the URL and the bits that are supposed to be cached (rationale: if it's that long ago, the other bits have to be re-fetched anyway)
and most importantly: it should not be the user who decides when to move something to a deeper layer (e.g. by bookmarking it and closing the tab), but some kind of heuristic -- could even be somewhat user-adaptive.

Browsing the web should not be something that requires many gigabytes of memory -- after all, an average web page is about 2 MB in transmission size (mostly graphics), so 2 GB should be enough to store 1000 pages (probably many more than that, because they're likely to share many resources like images, style sheets and script libraries).

Large obfuscated proprietary JavaScript programs are another problem: these effectively reduce your browser to a virtual machine that slowly executes sandboxed blobs. There is little possibility for memory sharing with such an execution model, even though many pages are based on the same libraries, like jQuery.
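
To make the tier idea concrete, here is a minimal Python sketch of such a demotion heuristic. Everything in it (the tier names, the Tab class, the timeout values) is invented for illustration; no real browser exposes this exact model.

```python
import time
from dataclasses import dataclass, field
from enum import Enum, auto


class Tier(Enum):
    """Hypothetical cache layers, roughly matching the list above."""
    ACTIVE = auto()       # rendered, scripts running
    PAUSED = auto()       # rendered, scripts paused or unloaded
    COMPRESSED = auto()   # compressed HTML/CSS/JS in memory, not rendered
    ON_DISK = auto()      # serialized to disk, including non-cacheable bits
    URL_ONLY = auto()     # only the URL plus normally-cacheable resources


# Invented inactivity thresholds (seconds) before demotion to each tier.
THRESHOLDS = [
    (Tier.PAUSED, 60),           # 1 minute
    (Tier.COMPRESSED, 15 * 60),  # 15 minutes
    (Tier.ON_DISK, 6 * 3600),    # 6 hours
    (Tier.URL_ONLY, 7 * 86400),  # 1 week
]


@dataclass
class Tab:
    url: str
    last_active: float = field(default_factory=time.time)
    tier: Tier = Tier.ACTIVE

    def touch(self):
        """User focused the tab: promote it back to the ACTIVE tier."""
        self.last_active = time.time()
        self.tier = Tier.ACTIVE


def demote_idle_tabs(tabs, now=None):
    """Move each tab to the deepest tier whose idle threshold it exceeds.

    A real heuristic would also look at memory pressure and user habits
    instead of fixed timeouts; this only shows the shape of the policy.
    """
    now = now if now is not None else time.time()
    for tab in tabs:
        idle = now - tab.last_active
        for tier, limit in THRESHOLDS:
            if idle >= limit:
                tab.tier = tier
    return tabs


if __name__ == "__main__":
    tabs = [Tab("https://example.org"), Tab("https://pyra-handheld.com")]
    tabs[1].last_active -= 20 * 60  # pretend this tab has been idle 20 minutes
    for tab in demote_idle_tabs(tabs):
        print(tab.url, tab.tier.name)
```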
 
I'm crying because of the off-topic, and because the CPU board seems to have some issues and it needs 5 weeks to get a new one...

The only good thing is, I have more time to enjoy my Pandora :)
 
There should totally be an option to "park" tabs that have been inactive for a long time. Either by simply reloading the page the next time the tab is opened, or saving them to disk for local loading once the tab is reopened. This would save a LOT of RAM and, more importantly, swapping for people with a reasonable amount of RAM (4 to 8 GiB).
At least Firefox doesn't load tabs in other tab groups until you open the group. I have several tab groups that I use like transient bookmarks. I have a "to read" tab group, several context-specific tab groups and a "general" tab group. I clear the "general" tab group every once in a while, putting any pages I still need into separate groups. This way when I open the browser it only loads a handful of pages, while still keeping the other tabs available. I use bookmarks only for permanent stuff.
It looks like tabs/windows and bookmarks are too crude an approximation of what some people really want: layers of cache.

  • pages currently in use (switching between them, having multiple pages on screen): these should be in memory in rendered form, scripts running and all
  • pages not currently in use: pause the scripts; after a while, unload them to free whatever memory they use
  • pages that are inactive for a while but might still be needed 'soon': these could still be in memory (as compressed HTML/CSS/JS) but not rendered, so much more compact
  • pages that are inactive for a longer time: stored on disk (even the bits that are not supposed to be cached)
  • pages that are inactive for an even longer time: only store the URL and the bits that are supposed to be cached (rationale: if it's that long ago, the other bits have to be re-fetched anyway)
and most importantly: it should not be the user who decides when to move something to a deeper layer (e.g. by bookmarking it and closing the tab), but some kind of heuristic -- could even be somewhat user-adaptive.

Browsing the web should not be something that requires many gigabytes of memory -- after all, an average web page is about 2 MB in transmission size (mostly graphics), so 2 GB should be enough to store 1000 pages (probably many more than that, because they're likely to share many resources like images, style sheets and script libraries).

Large obfuscated proprietary JavaScript programs are another problem: these effectively reduce your browser to a virtual machine that slowly executes sandboxed blobs. There is little possibility for memory sharing with such an execution model, even though many pages are based on the same libraries, like jQuery.
You could create a zram swap space with a higher priority than the disk swap space. This way it will not take inactivity time into account, but it's better than nothing.
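
For reference, setting that up is only a few commands on a typical Linux system. The sketch below just wraps the util-linux tools; the 1G size and priority 100 are arbitrary examples, it assumes the zram kernel module is available, and it has to run as root.

```python
import subprocess


def run(cmd):
    """Run a command, echo it, and fail loudly if it errors."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def setup_zram_swap(size="1G", priority=100):
    """Create a compressed in-RAM swap device with a higher priority than
    the on-disk swap (which normally has a negative default priority),
    so the kernel fills zram first and only then spills to disk."""
    run(["modprobe", "zram"])
    # zramctl --find claims a free /dev/zramN, sets its size and prints its name
    dev = subprocess.run(
        ["zramctl", "--find", "--size", size],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    run(["mkswap", dev])
    run(["swapon", "--priority", str(priority), dev])
    return dev


if __name__ == "__main__":
    print("zram swap active on", setup_zram_swap())
```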
 
The joys of hardware design. ED pretty much jinxed this from the start with the overconfidence he showed about it, I was just waiting for a mistake in the layout to show its ugly face :p
I was expecting a mistake or more, but was hoping it was one that doesn't prevent testing the rest!
I actually expected many more errors with that complex 10-layer board, especially since it was designed basically by hand and by one person. So if there is only one mistake, that's already very impressive. But to be realistic, there may be many more issues coming up that need some redesign. No hardware works perfectly with its first prototype.

I realistically expect the Pyra in late 2016 IF everything goes well. If more "parallel" work can be done to speed up the development and production of various parts (boards, case, keymat), it may be earlier, but better safe than sorry.

P.S.: I never have more than a handful of tabs open when browsing. My PC has 4 GB RAM and it's enough for my needs (so far). More than 6 tabs ---> messy! :p
 
I've gotten to over 8GiB memory use mostly with several virtual machines running while using two heavy IDEs (a project has design and template files in both, which sucks). Just because of that case I need 16GiB in my workstation, but I've never needed swap at home after going to 16GiB.
Well yeah, I know it's quite easy to need a lot of RAM for some specific reasons, but the point is that these are use cases outside of what most people would consider "normal use". Not a lot of people run VMs on a regular basis, and certainly not on-the-go.
B-ZaR is trying to draw a direct correlation between the RAM he uses on a desktop x86 computer and this handheld computer - which isn't the same. He's likely using Windows (huge RAM requirements) to run multiple Windows VMs (huge RAM requirements each) or some mixture with full desktop Linux configurations - giving each a couple of GB of dedicated RAM for that instance.

Wizardstan seems to have the right idea, that we won't NEED that much RAM, but not running VMs isn't the reason why - AND at least a few of us ARE planning to run VMs continuously on-the-go. They just aren't x86 Windows VMs.

Once all the pieces come together, I would like to use Debian on the Pyra as a host OS for at least two Android VMs, isolated from each other.  

One secure Android VM with few to no downloaded apps to handle email, contacts and the like. I plan to give this one 128MB RAM and an 8GB section of an SD card to live in, which is overkill. If we have the ability to do so, I would give it 1/8 of one CPU core to run in. It would run semi-continuously and check for communications every 15 minutes or so.

One for Android games - this one gets 256MB of RAM and a 64GB section of an SD card to live in. It would sit in sleep mode any time it's not being actively used as the 'on top' application. If we can have a network VM software bridge connect/disconnect switch, it wouldn't have access to the data feed unless I was actively downloading something to it.

The point is, 2GB of RAM is plenty to run the base Debian install plus multiple VMs on this ARM device. The Windows x86 power workstation with Windows VMs running is not an accurate comparison. This is an ARM system, and running ARM VMs should be possible without running out of RAM.
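
Capping a guest like that shouldn't need anything exotic on a Debian host: a transient systemd scope can enforce a RAM and CPU budget around whatever command actually boots the VM. A rough sketch follows; MemoryMax and CPUQuota are real systemd resource-control properties, but the start-android-vm command is a made-up placeholder and the values simply restate the numbers above.

```python
import subprocess


def launch_capped(cmd, memory="128M", cpu_quota="12%", unit="android-mail-vm"):
    """Start a command inside a transient systemd scope with hard caps:
    MemoryMax bounds its RAM use, and CPUQuota=12% is roughly 1/8 of one
    core. Requires systemd and sufficient privileges."""
    return subprocess.run(
        ["systemd-run", "--unit", unit, "--scope",
         "-p", f"MemoryMax={memory}",
         "-p", f"CPUQuota={cpu_quota}",
         *cmd],
        check=True,
    )


if __name__ == "__main__":
    # 'start-android-vm' is a made-up placeholder for whatever actually
    # boots the guest (QEMU, a container runtime, ...).
    launch_capped(["start-android-vm", "--profile", "mail"])
```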
 
B-ZaR is trying to draw a direct correlation between the RAM he uses on a desktop x86 computer and this handheld computer - which isn't the same. He's likely using Windows (huge RAM requirements) to run multiple Windows VMs (huge RAM requirements each) or some mixture with full desktop Linux configurations - giving each a couple of GB of dedicated RAM for that instance.
Waaait wait wait wait. Wait. Hold it.

I was merely pointing out a specific situation in which I even broke 8GiB, let alone the preposterous amounts in the post I was referring to. I run Arch Linux with my current memory usage at 2.2GiB, with Firefox taking 1GiB, so I wouldn't call my setup too memory-hungry even if I do run full KDE5. At the time that 8GiB usage was broken, I was running two full Java application server VMs, one Oracle VM, a Windows VM for testing stuff on Internet Explorer, and the JDeveloper (for DB design) and NetBeans (for code at the time) IDEs.

No, this is not a case anyone will probably run into with the Pyra. I was merely pointing out how ludicrously much I had to have running to even break the 8GiB threshold, to give perspective on how insanely huge the memory consumption posted above sounded.
 
It looks like tabs/windows and bookmarks are too crude an approximation of what some people really want: layers of cache.

  • pages currently in use (switching between them, having multiple pages on screen): these should be in memory in rendered form, scripts running and all
  • pages not currently in use: pause the scripts; after a while, unload them to free whatever memory they use
  • pages that are inactive for a while but might still be needed 'soon': these could still be in memory (as compressed HTML/CSS/JS) but not rendered, so much more compact
  • pages that are inactive for a longer time: stored on disk (even the bits that are not supposed to be cached)
  • pages that are inactive for an even longer time: only store the URL and the bits that are supposed to be cached (rationale: if it's that long ago, the other bits have to be re-fetched anyway)
and most importantly: it should not be the user who decides when to move something to a deeper layer (e.g. by bookmarking it and closing the tab), but some kind of heuristic -- could even be somewhat user-adaptive.
I'd definitely switch to a browser implementing that kind of caching.
 
At work I have 8GB of RAM, and sometimes I do end up hitting swap, but I wouldn't if I never had to run any memory-leaking browsers. It's the only machine where I always have a browser open (I do web development on it). That's Ubuntu Studio (Xfce-based) with a Windows 7 VM running all the time for certain things I need to run on Windows. It never hits swap the first few days I have it running (so of course a reboot takes it well away from swap again). I would like more RAM for this setup so that I could run another VM without shutting down the first one.

My home desktop has 16GB of RAM and I never come close to swap regardless of what I'm doing on it (it also has Ubuntu Studio).

I have a few laptops available and currently none have more than 4GB of RAM and I never seem to have the problem of hitting swap on them. One is a converted Chromebook, one is a netbook from a few years ago, and the others are old machines that have been written off by the company I work for or given to me by someone (I recycle them by putting Linux on them and then give them to relatives).
 
The joys of hardware design. ED pretty much jinxed this from the start with the overconfidence he showed about it, I was just waiting for a mistake in the layout to show its ugly face :p
I was expecting a mistake or more, but was hoping it was one that doesn't prevent testing the rest!
I actually expected many more errors with that complex 10-layer board, especially since it was designed basically by hand and by one person. So if there is only one mistake, that's already very impressive. But to be realistic, there may be many more issues coming up that need some redesign. No hardware works perfectly with its first prototype.
Indeed. This is why prototypes (usually several generations) are needed. For the main board of the Pyra we currently have the fourth, and the fifth is planned to go to mass production. So we would have been very lucky if the first CPU board had been 100% perfect.
But PCB layout is almost always done by one person. It can rarely be split and parallelized (well, multiple PCBs can be - if the positions of connectors and pin assignments are negotiated). The trick is to have EDA tools that do design rule checks (e.g. minimum distance between chips, wire traces etc.) while you do the hard manual work. This avoids most of the simple errors. The question is whether this can still be recognized as being done "by hand"... In my experience auto-routers are not usable for such complex boards. They are good for very simple projects.

What no tool can avoid is a mistake introduced when translating printed data sheets into the EDA tool. This is very comparable to retyping a text from a printed textbook into a different word processor. Typos happen; they are unavoidable and pass unnoticed by the person who types, even a careful one. They only become visible once a first proof is printed and someone looks at it and sees the typo. And unfortunately, predefined component libraries are not available for everything and in all data formats.

In fact we have another such translation bug - but it is not a visible problem, since it does not affect the copper wires. The component placement print for the DRAMs is shifted by 0.4mm. So the chips still look misplaced, even when placed correctly. And because of the same problem they were initially misplaced, and we have now got them (almost) repaired.

Another potential problem is that wrong design rule settings can make a layout unusable. If the rules are not strict enough, the layout designer can do wrong things without noticing. But for this the PCB manufacturer runs his own checks, so that such errors don't get past the production entry checks. The PCB manufacturer cannot, however, check for logical errors like the one described above.
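
For readers who have never seen a design rule check, here is a toy Python illustration of the idea: it checks a single rule (minimum clearance between rectangular footprints), while real EDA tools check hundreds of rules across every copper layer. The footprints and the 0.2 mm clearance below are invented, not taken from the actual Pyra layout.

```python
from dataclasses import dataclass
from itertools import combinations

MIN_CLEARANCE_MM = 0.2  # invented example rule, not a real Pyra constraint


@dataclass
class Footprint:
    name: str
    x: float  # lower-left corner, mm
    y: float
    w: float  # width, mm
    h: float  # height, mm


def clearance(a: Footprint, b: Footprint) -> float:
    """Smallest gap between two axis-aligned rectangles (0 if they touch or overlap)."""
    dx = max(0.0, a.x - (b.x + b.w), b.x - (a.x + a.w))
    dy = max(0.0, a.y - (b.y + b.h), b.y - (a.y + a.h))
    return (dx * dx + dy * dy) ** 0.5


def check_clearance(parts):
    """Return every pair of footprints that violates the clearance rule."""
    return [(a.name, b.name, clearance(a, b))
            for a, b in combinations(parts, 2)
            if clearance(a, b) < MIN_CLEARANCE_MM]


if __name__ == "__main__":
    parts = [
        Footprint("U1_SoC", 0.0, 0.0, 15.0, 15.0),
        Footprint("U2_DRAM", 15.1, 0.0, 10.0, 10.0),  # only 0.1 mm away from U1
        Footprint("C17", 30.0, 5.0, 1.0, 0.5),
    ]
    for a, b, gap in check_clearance(parts):
        print(f"DRC violation: {a} and {b} are only {gap:.2f} mm apart")
```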
 
I personally would have been very happy with 4GB.

I'm sure the 2GB will take us far... but I intend to use the Pyra as my primary computing device, and 2GB to do heavy graphics work has me concerned.
4 GB is far too little for a primary computing device IMO. My desktop has 16 GB and I'm constantly swapping it up to 24-25 GB.

Maybe 2 GB will be sufficient for my intended use (VNC + minimal local stuff like Uber), but the more the better IMO...
Windows user? I have yet to use more than 1GB on the devboard, even with a heavy amount of multitasking. Not sure what graphics work you plan on doing, but I think you will run into CPU loading issues before the memory becomes a problem.
Edit: For my typical Linux gaming/desktop use I still don't think I needed 4GB, let alone the 8GB...
No, I run 32-bit Linux (with a 64-bit kernel to handle the physical memory). I don't do any graphics work, and due to shared memory I can't even be certain what uses so much, but I suspect Chromium is the biggest offender.

My desktop has 16 GB and I'm constantly swapping it up to 24-25 GB
HOLY CARP :blink: What are you doing that you consider "normal" to use that much? I've got 8GB and the only time it ever hits swap is when I open up 50+ tabs in Chromium.

Even non-"normal" stuff like compiling massive projects or loading large models in Blender generally stay below 4GB.
I use 10-20 virtual desktops, with multiple browser windows on some and many tabs in each window. Chromium's task manager tells me some chat page I use for work called "Mattermost" is using 452 MB memory, the browser itself 405 MB, Google Docs 300 MB, Etherpad 278 MB, Google Voice 231 MB, Craig's List 203 MB, "GPU Process" 184 MB, some bugzilla 185 MB, etc...

I'm also pretty sure that Bitcoin Core alone puts about 1 GB of memory to good use.

Kontact seems to be around 528 MB. KDE's MySQL eats 428 MB. Plasma 4 uses 360 MB. Clementine uses 283 MB. X eats 248 MB. Konversation (IRC) uses 242 MB. Just KRunner uses 170 MB. Sigh.
 
Consider also, from those numbers, that just a basic KDE desktop playing music and checking email needs >2 GB...
 
Consider also, from those numbers, that just a basic KDE desktop playing music and checking email needs >2 GB...
Haven't used KDE in over a decade; that statement reminds me why I don't.
 
Indeed. This is why prototypes (usually several generations) are needed. For the main board of the Pyra we currently have the fourth, and the fifth is planned to go to mass production. So we would have been very lucky if the first CPU board had been 100% perfect.

But PCB layout is almost always done by one person. It can rarely be split and parallelized (well, multiple PCBs can be - if the positions of connectors and pin assignments are negotiated). The trick is to have EDA tools that do design rule checks (e.g. minimum distance between chips, wire traces etc.) while you do the hard manual work. This avoids most of the simple errors. The question is whether this can still be recognized as being done "by hand"... In my experience auto-routers are not usable for such complex boards. They are good for very simple projects.

What no tool can avoid is a mistake introduced when translating printed data sheets into the EDA tool. This is very comparable to retyping a text from a printed textbook into a different word processor. Typos happen; they are unavoidable and pass unnoticed by the person who types, even a careful one. They only become visible once a first proof is printed and someone looks at it and sees the typo. And unfortunately, predefined component libraries are not available for everything and in all data formats.

In fact we have another such translation bug - but it is not a visible problem, since it does not affect the copper wires. The component placement print for the DRAMs is shifted by 0.4mm. So the chips still look misplaced, even when placed correctly. And because of the same problem they were initially misplaced, and we have now got them (almost) repaired.

Another potential problem is that wrong design rule settings can make a layout unusable. If the rules are not strict enough, the layout designer can do wrong things without noticing. But for this the PCB manufacturer runs his own checks, so that such errors don't get past the production entry checks. The PCB manufacturer cannot, however, check for logical errors like the one described above.
Thanks for some inside views of that complex process. We had some very basic circuit layout lessons back in tech school, including a simple program that helps with that. But even that was not easy, at least not for me. ^^" So I find your work even more impressive. Good luck finding all the errors before making another prototype board, to reduce the number of generations needed. :)
 
I have an Acer C720 Chromebook running Ubuntu with 2GB RAM and it gets bogged down fairly easily doing more than one thing at a time. If I want to open more than 3 or 4 tabs in Firefox, I need to make sure nothing else is running. It's sufficient for what I use it for, but I certainly wouldn't want to use it as a primary device.
 
KDE doesn't need that much RAM. I'm running it on my devboard and have never had memory issues, even when running GIMP, Firefox, LibreOffice, etc. at the same time with it.
 
I have an Acer C720 Chromebook running Ubuntu with 2GB RAM and it gets bogged down fairly easily doing more than one thing at a time. If I want to open more than 3 or 4 tabs in Firefox, I need to make sure nothing else is running. It's sufficient for what I use it for, but I certainly wouldn't want to use it as a primary device.
Doesn't seem to be much of a problem on the devboard... I still can't easily push it over 1GB with just web browsing and even a few other things going on: IRC, LibreOffice and a game running. Screenshot:

[Attached screenshot: Performance.png]
 