Interview: Jim Gettys (Part I)
Jim was kind enough to take what must have been a considerable amount of time to answer our questions on this project. What follows is the first part of the interview.
LWN: Could you briefly describe your role with the OLPC project?
The educational software and content are the province of others: Nicholas Negroponte (the OLPC chairman), Walter Bender, Seymour Papert, Alan Kay, and others, who have decades of experience in the education of children with computers, often in the developing world.
I also don't worry about how the bits get from machine to machine: Michail Bletsas is our Chief Connectivity Officer. Mary Lou Jepsen is our CTO and is responsible for our novel display technology, and Mark Foster is V.P. of Engineering and chief hardware architect. Quanta Computer, founded by Barry Lam, which makes almost a third of the world's laptops, is building the OLPC machine.
What are the features one would want for school-aged children, grades K-12? A large fraction of such children are in parts of the developing world where electricity is not available at home, or often even at school, so for many children, a computer with low power consumption, potentially human-powered, is a necessity, not a convenience.
Teaching may not even take place indoors, and certainly when children are at home, they often will not be inside where conventional LCD screens are usable. Children usually walk to and from school every day; weather is unpredictable, and rain, dirt, and dust are commonplace. And cost is a major consideration if we are to bring computers, and their great power to help children learn, to children everywhere.
Much more about the hardware can be found in our wiki.
It appears that few people appreciate the extent to which this project is pushing the leading edge of free software development. Consider the power management issues, application slimming, system (non-)management improvements, mesh networking, application checkpointing, pervasive IPv6, localization problems, etc. Every one of these goals should benefit users who will never see an OLPC system. How many of these goals do you think you will be able to achieve by launch time?
Power management: We are doing at least two, if not three, true innovations in this area:
- The Marvell wireless chip, which has an ARM 9 and 92K of RAM, can forward packets in the mesh network while the processor is suspended to RAM. This capability has been demonstrated in the lab, and Michail Bletsas is confident of the outcome; in fact, it was an actual demonstration that convinced us to use Marvell. Other wireless vendors lack this capability. Our current estimate is that in this mode the wireless chip can be forwarding packets while the system consumes less than half a watt. We want there to be as little incentive to defeat the wireless as possible, so this is a key innovation: if children aren't confident there will be power when they need it, they might work to defeat the mesh.
- The display can be on while the processor is suspended, saving power. In some modes, we expect to be suspending the CPU whenever it is idle, even for periods as short as a second or two. Since our display is also novel and consumes much less power than conventional LCDs, even the Geode's low power consumption would otherwise have dominated total energy use.
- Look around you the next time you sit in a conference room. How many of you are actively using your machine at any given instant? How much of the time are you just reading the screen? In many modes of use, once the screen's power consumption has been solved (as it is in our display), the remaining major power consumers are the processor, power supply, and motherboard components. By making suspend/resume unnoticeable, we can save most of the remaining power used in the system.
Mark Foster described his novel, extremely fast suspend/resume software technique at the Linux Power Management Summit this spring. Whether we will need to implement it on our hardware to reach our goal of < 200ms suspend/resume cycles awaits some laboratory tests (an iPAQ can already suspend and resume in under a second), but I expect we may need it. Any performance work *must* be preceded by measurement to be useful: spending time optimizing the wrong code is a waste. Of course, the faster suspend/resume can be made to work, the more aggressive we can be about suspending and saving power. This is an example of an area where incremental improvement (once the basic capabilities are in place) is possible.
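As a rough illustration of the "measure first" point (this is an editor-supplied sketch, not OLPC code), a suspend/resume cycle can be timed from user space by arming an RTC wakeup and subtracting the programmed sleep from the elapsed wall-clock time. The sysfs paths, the availability of the RTC wakealarm interface, and root privileges are all assumptions here:

    #!/usr/bin/env python
    # Sketch only: time a suspend/resume cycle by arming an RTC wakeup.
    # Assumes root, suspend-to-RAM support, and a kernel exposing
    # /sys/class/rtc/rtc0/wakealarm and /sys/power/state.
    import time

    SLEEP_SECONDS = 5  # how long to stay suspended

    def measure_suspend_resume():
        with open("/sys/class/rtc/rtc0/wakealarm", "w") as f:
            f.write("0")                                # clear any old alarm
        with open("/sys/class/rtc/rtc0/wakealarm", "w") as f:
            f.write(str(int(time.time()) + SLEEP_SECONDS))

        start = time.time()
        with open("/sys/power/state", "w") as f:
            f.write("mem")                              # blocks until resume
        elapsed = time.time() - start

        # What is left after the programmed sleep is (roughly) the
        # suspend + resume overhead that needs to get below ~200 ms.
        overhead = elapsed - SLEEP_SECONDS
        print("whole cycle: %.3fs, suspend/resume overhead: ~%.3fs"
              % (elapsed, overhead))

    if __name__ == "__main__":
        measure_suspend_resume()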
We are also planning to dynamically change the refresh rate of the screen depending on screen activity; as I've seen this capability in graphics chips for cell phones, I won't claim it as fully innovative, though it will be new for the X Window System and for window systems on laptops.
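As an illustrative sketch only (not the mechanism described above), a crude user-space policy could lower the refresh rate after a period of inactivity and restore it on activity. It assumes the xprintidle and xrandr utilities are installed and that the driver honors xrandr's -r switch:

    # Sketch: lower the display refresh rate while the user is idle.
    # Assumes the external tools `xprintidle` and `xrandr` exist.
    import subprocess
    import time

    IDLE_MS = 30 * 1000   # consider the user idle after 30 seconds

    def idle_milliseconds():
        return int(subprocess.check_output(["xprintidle"]))

    def set_refresh(rate_hz):
        subprocess.call(["xrandr", "-r", str(rate_hz)])

    lowered = False
    while True:
        if idle_milliseconds() > IDLE_MS and not lowered:
            set_refresh(30)    # slow refresh while nobody is looking
            lowered = True
        elif idle_milliseconds() <= IDLE_MS and lowered:
            set_refresh(60)    # back to normal on activity
            lowered = False
        time.sleep(1)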
It is hard to predict how long similar hardware capabilities will take to reach conventional hardware; but by showing it is possible, we know it will happen, and the software support required will be useful to everyone.
There are also a number of places where changes in Linux and the desktop environment can help. For example, the tickless patches currently being worked on obviate the need for the CPU to wake up 100 times a second; the more of the time a processor is fully idle, the more power is saved. Another example is the places where the desktop environments poll periodically to find out about changes in the system: notification systems are much more efficient, and allow the system to be idle more of the time.
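To make the polling-versus-notification point concrete, here is a small generic sketch (not taken from any desktop project): the first loop wakes ten times a second whether or not anything happened, while the second blocks until it is notified, letting an otherwise idle system stay idle:

    # Illustrative sketch: two ways for a desktop component to wait for work.
    import queue
    import threading
    import time

    work = queue.Queue()

    def poller():
        # Anti-pattern: periodic wakeups just to check for changes.
        while True:
            try:
                item = work.get_nowait()
            except queue.Empty:
                time.sleep(0.1)        # 10 wakeups/second, forever
                continue
            print("polled:", item)

    def waiter():
        # Better: block until notified; zero wakeups while idle.
        while True:
            item = work.get()          # sleeps until something arrives
            print("notified:", item)

    threading.Thread(target=waiter, daemon=True).start()
    work.put("battery state changed")
    time.sleep(0.5)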
Out-of-memory behavior needs serious work: the current OOM killer's policies are, of necessity, very poor. Nokia has been experimenting with more useful policies, exploiting information at the user-environment level, that can improve this behavior by informing the kernel which processes are the most vital and which can be shot.
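A hedged sketch of the general idea (not Nokia's or OLPC's actual mechanism): on Linux the user environment can hint the kernel by writing to /proc/<pid>/oom_adj, where -17 exempts a process from the OOM killer and positive values make it a preferred victim. It requires appropriate privileges and a kernel exposing oom_adj:

    # Sketch: let the user environment tell the kernel which processes matter.
    import sys

    def set_oom_adj(pid, adjustment):
        # -17 disables OOM killing for the process; positive values make
        # it a preferred victim when memory runs out.
        with open("/proc/%d/oom_adj" % pid, "w") as f:
            f.write(str(adjustment))

    if __name__ == "__main__":
        # e.g. protect the window manager:       set_oom_adj.py 1234 -17
        #      sacrifice an idle activity first: set_oom_adj.py 5678 10
        set_oom_adj(int(sys.argv[1]), int(sys.argv[2]))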
Application slimming: There seems to be a common fallacy among programmers that using memory is good: on current hardware it is often much faster to recompute values than to reference memory to get a precomputed value. A full cache miss can cost hundreds of cycles, and hundreds of times the power consumption of an instruction that hits in the first-level cache. Making things smaller almost always makes them faster (and lower power). Similarly, it can be much faster to redraw an area of the screen than to copy a saved image from RAM to a screen buffer. Many programmers' presumptions are now completely incorrect, and we need to reeducate ourselves.
Sometimes we may just choose alternative applications. Of course, this may not be what some application writers would like, and the solution they can take is obvious. We have a large set of software to choose from: this is one of open source's great strengths.
Federico Mena-Quintero and others have been doing some very nice work identifying and fixing some of this gratuitous memory usage.
A large part of this task is raising people's consciousness that we've become very sloppy about memory usage; often there is low-hanging fruit that can make things use less memory (and execute faster and use less power as a result). Sometimes it is poor design of memory usage, and sometimes it is out-and-out bugs leaking memory. On our class of system, leaks are a really serious concern: we don't want to be paging to our limited-size flash.
In fact, much of the performance unpredictability of today's free desktop can be attributed to the fact that several of our major applications are wasting or leaking memory and driving even systems with half a gigabyte of memory or more into paging quite quickly. Some of these applications we care about, and some we don't: OpenOffice is just not the right tool for someone learning to read and write, and we'll be perfectly happy to use other tools. Some other major offenders need fixing (and work has started): Firefox (Gecko), for example, has been hemorrhaging memory when tabs are used, which can force you into paging quite quickly. Between evolution-data-server and Firefox alone, many people's desktops exhibit unpredictable performance soon after boot due to paging; fixing these problems will benefit all.
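For readers who want to watch this happen on their own systems, a simple editor-supplied script can sample a process's virtual and resident sizes from /proc/<pid>/status; a resident set that climbs steadily while the application sits idle is the signature of the leaks described above:

    # Sketch: sample a process's memory use over time (Linux /proc only).
    import sys
    import time

    def vm_sizes(pid):
        sizes = {}
        with open("/proc/%d/status" % pid) as f:
            for line in f:
                if line.startswith(("VmSize:", "VmRSS:")):
                    key, value = line.split()[:2]
                    sizes[key.rstrip(":")] = int(value)   # kilobytes
        return sizes

    if __name__ == "__main__":
        pid = int(sys.argv[1])
        while True:
            s = vm_sizes(pid)
            print("VmSize: %(VmSize)d kB  VmRSS: %(VmRSS)d kB" % s)
            time.sleep(5)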
Tools: The memory usage display tools we have today are very misleading to naive (and even journeyman) programmers, often leading them to massively wrong conclusions.
My biggest personal frustration (given my history with X) is people saying: "X is bloated". The reality is: a) X maps all the frame buffer and/or register space into its address space, so a measurement of virtual address space used is completely misleading: X may actually be consuming only a very small amount of your DRAM, despite a virtual size of a hundred megabytes; and b) X does what it's told: many applications seem to think that storing pixmaps in the X server (and often forgetting about them entirely) is a good strategy, whereas retransmitting or repainting the pixmap may be both faster and use less memory. Once in a while there is a memory leak in X (generally in the graphics drivers), but almost always the problem is leaks in applications, which often forget the pixmaps they were using.
RAM in the X server is just as much your program's RAM, though it is in a different address space. People forget that the X Window System was developed on systems with 2 megabytes of RAM, and works today on 16-megabyte iPAQ handhelds.
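To illustrate the virtual-size trap, here is a small sketch (not an official tool) that sums each mapping's Size and Rss from /proc/<pid>/smaps; run against an X server, the large device mappings (framebuffer, register apertures) show up in the virtual total but contribute little or nothing to the resident total:

    # Sketch: compare virtual size with actually-resident memory.
    import sys

    def summarize(pid):
        total_size = total_rss = 0
        with open("/proc/%d/smaps" % pid) as f:
            for line in f:
                if line.startswith("Size:"):
                    total_size += int(line.split()[1])   # kB
                elif line.startswith("Rss:"):
                    total_rss += int(line.split()[1])    # kB
        print("virtual: %d kB   resident: %d kB" % (total_size, total_rss))

    if __name__ == "__main__":
        summarize(int(sys.argv[1]))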
We need better tools; some are beginning to appear. OLPC is sponsoring a Google Summer of Code student, Eduardo Silva, from Chile, who is working on a new tool called Memphis to help with this problem.
Work done on memory consumption will benefit everyone: not everyone in the world has a 2GHz laptop with a gig or two of RAM...
System (non-)management improvements: I think there are two, mostly separable, areas here: 1) the kids' laptop, which we want to focus primarily on making "easy to fix", rather than "hard to break", so interested children can learn computing. And by working hard to automate backup, we'd like the restore of a laptop to be dead simple if there is some problem. By using LinuxBIOS, we expect to be able to boot and reinstall via the network easily. Requiring cables and/or USB keys for restore is costly and greatly complicates logistics.
2) the school servers need to be "hard to break" as well as "easy to fix", and remotely manageable, as finding expertise at a remote school of 10 children and one teacher is very unlikely. This is one of the factors driving us to IPv6 (much more below), since NATed IPv4 islands cannot easily be remotely diagnosed or updated automatically without expertise on the ground, which will often be rare in our deployment areas.
I've recently become impressed by technology developed for and by PlanetLab that Dave Reed brought to my attention. It is worth everyone's careful look: see www.planet-lab.org.
Mesh networking: Pulling wires and installing access points are expensive and require expertise, neither of which may be available; and we want kids to be able to work together anytime they meet up, even under a tree 3 kilometers from nowhere.
MIT Roofnet and other projects have shown the feasibility of mesh networking, where one machine forwards packets on behalf of others. Michail Bletsas is OLPC's expert in this area, and has a lot of first-hand experience. In radio-quiet areas, quite long links become feasible; in urban areas only much shorter links are feasible, but the density of machines is likely much higher.
Our system is interesting in a number of ways beyond mesh software:
- it has antennae that can be rotated up above the top of the machine and are more efficient than what you find in a conventional laptop; this should roughly increase the footprint of each machine by a factor of four (in area).
- the Marvell wireless chip we are using can operate autonomously, so it can forward packets in the mesh even if the processor is suspended to RAM! This should cut power consumption for an unused laptop to well under one watt (the current estimate is about 0.5 watts), while still providing support to other machines in the mesh.
One of the challenges the community can help with later this year is learning which techniques work best when the nodes of the mesh are mobile machines. There are a number of possible routing protocols (some of which should become power-aware; not all machines may need to bother forwarding packets all the time), and which will work best in what circumstances should be fun to learn.
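As a toy illustration of the routing question (in no way the OLPC mesh implementation), the simplest possible metric is hop count over the currently visible links; real protocols add link quality, mobility handling, and the power awareness mentioned above. The node names below are made up:

    # Toy sketch: shortest-hop next-hop selection over a mesh snapshot.
    from collections import deque

    links = {                      # hypothetical connectivity snapshot
        "A": ["B"],
        "B": ["A", "C", "D"],
        "C": ["B", "D"],
        "D": ["B", "C", "school-server"],
        "school-server": ["D"],
    }

    def next_hop(source, destination):
        """Breadth-first search; returns the neighbour to forward to."""
        queue = deque([(source, None)])
        seen = {source}
        while queue:
            node, first = queue.popleft()
            if node == destination:
                return first
            for neighbour in links[node]:
                if neighbour not in seen:
                    seen.add(neighbour)
                    queue.append((neighbour, first or neighbour))
        return None

    print(next_hop("A", "school-server"))   # -> "B": A relays through B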
Application checkpointing: 128 megabytes of memory is enough to run (almost) any open source application; there are a few exceptions, but few that are of educational interest for young children. It isn't enough, on a system where paging needs to be avoided, to run arbitrary numbers of the larger applications at the same time.
In addition to the community working on dieting our environment (and making it run faster as a result), application checkpointing could help the user's experience greatly. When memory runs low, idle applications not currently in use could save their state and be restarted later at the same point. We see some of this being done in Maemo on the Nokia 770; the conventions for this need codification at freedesktop.org and implementation in applications.
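Since no freedesktop convention exists yet, the following is a purely hypothetical sketch of what a checkpointable activity might look like, with SIGUSR1 standing in for whatever "save your state now" notification eventually gets specified; the file name and state layout are invented for illustration:

    # Hypothetical sketch: an activity that can checkpoint and resume.
    import json
    import os
    import signal

    STATE_FILE = os.path.expanduser("~/.drawing-activity.state")  # made up

    class DrawingActivity:
        def __init__(self):
            self.state = {"strokes": [], "current_page": 0}
            if os.path.exists(STATE_FILE):
                with open(STATE_FILE) as f:
                    self.state = json.load(f)     # resume where we left off

        def checkpoint(self, signum=None, frame=None):
            with open(STATE_FILE, "w") as f:
                json.dump(self.state, f)          # small, flash-friendly write

    activity = DrawingActivity()
    signal.signal(signal.SIGUSR1, activity.checkpoint)
    signal.pause()   # wait for checkpoint requests (illustration only)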
Pervasive IPv6: In the developed world, we do not have a shortage of IPv4 addresses at this time. We got to the Internet first, and grabbed enough "land" that we don't yet feel the pain felt in other parts of the world.
We see things differently from where we sit. IPv6 to us is clearly essential on a number of grounds:
- address space, and not wanting a flag-day conversion, which would be very difficult. There are good arguments that we have effectively exhausted the IPv4 address space, and that even conservation measures cannot change the situation by more than a year or two. In the developing world the situation is already dire. In some places, entire universities are hidden behind a single routable IPv4 address, and in others, NATs are as much as five levels deep.
Vint Cerf told us that part of this problem is artificial: some cultures are so worried about losing face if they were turned down that they have not been asking for addresses, even though they would have been granted. And part of it is very real indeed: Brazil is planning a deployment of 100,000,000 IP TV sets, for example; this cannot be done using IPv4. And we hope to be deploying at such scale within a few years as well. The cliff is already visible, and we'd just as soon not fall off it; it hurts when you hit the bottom.
- it is impossible to diagnose problems if you can't observe them. Initially, in many parts of the world, we have to presume limited expertise is available on the ground, so local diagnosis could easily become the limiting factor for deployment. If the school networks are fragmented by NATs, remote diagnosis becomes much more complicated.
- Building collaborative applications (or almost any new network application) has become extremely difficult due to the extensive deployment of NATs in the Internet: Skype is one of the few applications that, by standing on its head in many ways, has succeeded in (usually) working despite this disaster. Building such applications becomes radically easier if we go back to the end-to-end principles of the Internet. NAT has made it very difficult to deploy new applications.
Given tunneling technology (and 6to4, when routable addresses are available), in concert with the IPv6 deployment that has begun in many parts of the world, we can again have a clean end-to-end network, in which kids anywhere can share with their peers all over the world.
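As a small illustration of why this matters for collaborative software (an editor-supplied sketch, not an OLPC API), two peers on an end-to-end IPv6 network can connect directly, with no relay or NAT-traversal machinery; the address shown is a documentation placeholder:

    # Sketch: direct peer-to-peer sharing over end-to-end IPv6.
    import socket

    def share_listener(port=7777):
        server = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
        server.bind(("::", port))          # all IPv6 interfaces
        server.listen(1)
        connection, peer = server.accept()
        print("sharing with", peer[0])
        return connection

    def join_peer(address, port=7777):
        client = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
        client.connect((address, port))    # e.g. "2001:db8::42" (placeholder)
        return client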
So our judgment is that the time has really come, and (almost) all applications are finally ready.
Localization problems: According to the Ethnologue web site, there are 347 languages with more than one million speakers in the world; together they cover 94% of the world's population. We already see localization in open source systems for languages with fewer than one million speakers. If we continue along the current path of localization, we're going to find ourselves with a real problem within several years.
While I expect the current mechanisms and processes might get us through the first round of deployments, the year after, this problem will become more acute. As a community, we need to recognize this approaching problem and rise to the challenge. I wrote in more detail in my blog.
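For context, the "current path" for most free software is GNU gettext message catalogs selected by the user's locale, along the lines of this minimal sketch (the text domain and language are made up for illustration):

    # Sketch: standard gettext-based localization.
    import gettext

    translation = gettext.translation(
        "olpc-activity",                  # hypothetical text domain
        localedir="/usr/share/locale",
        languages=["am"],                 # e.g. Amharic
        fallback=True)                    # fall back to English msgids
    _ = translation.gettext

    print(_("Save your work?"))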
Are you getting the needed level of assistance from the community in reaching these goals?
We are distributing almost 500 bare motherboards to enable people to help with drivers, power management, code optimization (which not only makes things faster, but reduces power consumption), mesh network experimentation, etc. And there will be further opportunities during beta test later this year.
What do you most urgently need help with at this time?
In most areas, we're still pedal to the metal on basic problems like device drivers, and finishing up LinuxBIOS + Linux as boot loader so that we can support installation over the (mesh) network. The memory consumption problems, and how to manage low-memory situations, are also key. It would help greatly if applications would bother to be able to checkpoint their state and restart exactly where they left off.
Let's take one of those goals: paring down applications so that they fit into the OLPC's memory. This is clearly an activity which benefits everybody - bloated applications are slow applications. Are you making progress in putting the needed tools on a diet?
We have a simple principle everyone should be aware of: if your application is bloated, it's much less likely people will be able to use it on the machine. There are usually alternatives for any particular piece of software. Given the healthy competition in free software, there is only a small subset of software that we care about to the point of fixing it ourselves. If you want your software to be usable, please make it so: everyone will benefit from leaner, faster applications, not only OLPC.
How are the upstream communities responding to debloating patches?
Often, rather than patches, it is a matter of helping people understand there are problems that need to be fixed. Chris Blizzard, who is on the board of the Mozilla Corporation, now works on OLPC (he's in charge of the Red Hat team), and the Firefox team is finally aware they have a serious problem; test cases are being generated. Chris says some progress has already been made. Much more is needed, and there are viable alternatives we could use if Firefox does not come through. But we think they probably will by the time we ship in volume.
Many thanks to Jim Gettys for taking the time to answer these questions.
The second part of the interview will appear next week.
Why not have X-Windows store the COMPRESSED image?
Posted Jun 28, 2006 20:16 UTC (Wed) by dwheeler (guest, #1216)

One question, maybe the interviewer can ask Gettys: why not just have X-Windows store the COMPRESSED image? He notes that "many applications seem to think that storing pixmaps in the X server (and often forgetting about them entirely) is a good strategy, whereas retransmitting or repainting the pixmap may be both faster and use less memory," and he mentions Federico Mena-Quintero's work. But if they're stored in the client, and sent each time to X-Windows, you now have to at least resend... and X-Windows isn't always local; over a mesh in particular you DON'T want the image retransmitted each time. If you have to store it anyway, why not have X-Windows store compressed, not uncompressed, images? Indeed, if X-Windows could accept common compressed formats, and recompress when it feels like it, it could have all the memory benefits he notes...!

Why not have X-Windows store the COMPRESSED image?
Posted Jun 28, 2006 21:56 UTC (Wed) by allesfresser (guest, #216)

Possibly the compression and decompression would eat a lot of CPU, and therefore power, which would be bad. I could see that being the case, especially if there are a lot of not-very-compressible pixmaps floating around.

Why not have X-Windows store the COMPRESSED image?
Posted Jun 28, 2006 21:58 UTC (Wed) by Yorick (guest, #19241)

That would certainly decrease memory usage, but perhaps not as much as regenerating the pixmaps. A compressed image is just a subroutine written in a very compact language specialised for generating broad classes of images. The application's own code and data to regenerate the pixmaps are in most cases already in memory, and the code can usually be more efficient than a compressor would be, as it is adapted to its purpose.

I could definitely imagine compressed pixmaps being a useful way to lower the memory requirements of existing applications. It just needs some heuristics to avoid compressing those used frequently.

On a lower level, compressing memory pages that would normally have been swapped out to disk could make sense on a swapless machine. (This could be an interesting kernel project.)

Why not have X-Windows store the COMPRESSED image?
Posted Jun 29, 2006 2:20 UTC (Thu) by dlang (guest, #313)

What compression algorithm are you thinking of using? Remember that for this you have to use lossless compression, and those are famous for having different behavior on different sets of data.

However, the real problem is that X is doing exactly what it's been told to do. Programs should know which images are worth keeping around and which ones aren't; too many of them default to keeping lots of things around and never using them.

It may be an interesting thing to explore storing these pixmaps on the video card itself; I don't know if the current X drivers can support this or not.

Why not have X-Windows store the COMPRESSED image?
Posted Jun 29, 2006 2:41 UTC (Thu) by guinan (subscriber, #4644)

There was an X Image Extension (XIE) some time ago, which basically did what you're describing, but it got deleted from the tree in X11R6.7.0. I googled but couldn't find a definitive reason why it was deleted, but I gather it wasn't popular with some developers.

Why not have X-Windows store the COMPRESSED image?
Posted Jun 29, 2006 9:52 UTC (Thu) by jg (guest, #17537)

It is a good question. Care to work on a compressed image transport extension? Then storing compressed images in the server would probably be straightforward.

I warn you that the last time this was attempted, the people doing XIE slipped down the slippery slope into complexity hell, to the point that XIE is no more.

None the less, it is on the list of things I'd like to see/have: care to work on it?

BTW, on a local machine, moving the images is *really* fast. Just clock it sometime.

Why not have X-Windows store the COMPRESSED image?
Posted Jun 29, 2006 10:08 UTC (Thu) by kleptog (subscriber, #1183)

The big problem is that these images are stored so they can be quickly blitted to the screen. What graphics cards do you know of that support lossless compression on images? Many would support JPEG/MPEG style, but lossless compression is a pain, especially when you're going to want to read individual pixels from it (consider if a window partially overlaps the image you want to display).

Personally I think more use should be made of the XShm extension, where the client stores the images in shared memory and the X server can see them. One thing graphics processors can do is blit images and convert colour and layout on the fly.

I think what would really help is a way to see what pixmaps (and other resources) are currently stored in the server. That would make it easier to see if anything is being leaked.

Why not have X-Windows store the COMPRESSED image?
Posted Jun 29, 2006 20:50 UTC (Thu) by oak (guest, #2786)

> Personally I think more use should be made of the XShm extension,
> where the client stores the images in shared memory and the X server
> can see them. One thing graphics processors can do is blit images
> and convert colour and layout on the fly.

The X client memory is main RAM on the machine; X server memory is (partly) memory on the graphics card. I.e. at least the graphics needs to be copied from the client SHM memory to the gfx card memory before blitting on screen. However, I don't think this is an issue compared to the issue of apps keeping images redundantly in memory and making the whole machine swap...

Probably one reason why apps store so many images in the X server is that this way the images can be blitted much faster when the app is using the X server remotely. Faster networks make copying image data from the remote client to the X server less of an issue, but maybe with server-side images there's less latency? (which is the real killer of UI responsiveness with remote X)

> I think what would really help is a way to see what pixmaps
> (and other resources) are currently stored in the server.
> That would make it easier to see if anything is being leaked.

There is already such a thing. There's a patch to the X resource extension and a simple utility for showing the listed pixmaps. It was mentioned in one of the optimization blogs, I think.

Btw, for measuring application response times (e.g. to see whether a performance optimization had an effect), there's a tool called "xresponse" (it outputs timestamped "X damage", i.e. screen update, events).

Why not have X-Windows store the COMPRESSED image?
Posted Jul 2, 2006 7:30 UTC (Sun) by giraffedata (guest, #1954)

There's no such thing as X-Windows. There's X (full name: The X Window System) and there's Windows (Microsoft).

Why not have X-Windows store the COMPRESSED image?
Posted Jul 5, 2006 1:52 UTC (Wed) by k8to (guest, #15413)

Great. You're pedantically correct. However, knowing this, we non-pedantic long-time Unix users still call it X Windows all the time, along with other names.

Interview: Jim Gettys (Part I)
Posted Jun 28, 2006 20:45 UTC (Wed) by cventers (guest, #31465)

KHTML is light and easy, which is why Nokia was working on it a little while ago :)

In all seriousness, I think the OLPC project is fantastic, and very much thank anyone contributing to it.

Interview: Jim Gettys (Part I)
Posted Jun 28, 2006 21:09 UTC (Wed) by faassen (guest, #1676)

Fascinating stuff! I'd like to see more interviews like this. If they can get the mesh networking to work with a long battery life, all kinds of interesting social effects will start happening once this gets into enough people's hands. (And I want one, of course!)

Interview: Jim Gettys (Part I)
Posted Jun 28, 2006 22:00 UTC (Wed) by allesfresser (guest, #216)

I wonder if they've considered making these available to the developed world at inflated prices, to subsidize the developing world machines. It would help the economy of scale, too. As long as they keep the priority on the developing world (i.e., they ignore requests for features that would be useless to the kids in the village), it might be helpful. (Maybe?)

Interview: Jim Gettys (Part I)
Posted Jun 28, 2006 23:00 UTC (Wed) by drag (guest, #31333)

I HOPE so.

Because I really really really freaking want one of these things, and I don't want to get it from some bozo stealing laptops from children to sell on ebay.

Interview: Jim Gettys (Part I)
Posted Jun 29, 2006 11:13 UTC (Thu) by smitty_one_each (subscriber, #28989)

I'm thinkin' a six pack o' them gadgets would be a great way to do a first compile-farm project.

Interview: Jim Gettys (Part I)
Posted Jun 29, 2006 0:14 UTC (Thu) by gjmarter (guest, #5777)

A commercial version has certainly been discussed.

Interview: Jim Gettys (Part I)
Posted Jun 29, 2006 0:33 UTC (Thu) by dododge (guest, #2870)

> I wonder if they've considered making these available to the developed world at inflated prices, to subsidize the developing world machines.

Since late last year there's been an effort to drum up support for doing exactly that. Bear in mind this has no official involvement with OLPC; it's just an attempt to convince them that there's enough of a market to make them available.

Interview: Jim Gettys (Part I)
Posted Jun 29, 2006 9:56 UTC (Thu) by jg (guest, #17537)

Convincing isn't the issue. We're focused right now on getting the kids' machine out, and what others have done on our behalf is sufficient.

Regards,
- Jim

Interview: Jim Gettys (Part I)
Posted Jun 29, 2006 9:54 UTC (Thu) by jg (guest, #17537)

Yes, there is already such a web site.

Regards,
- Jim

Firefox?
Posted Jun 29, 2006 8:42 UTC (Thu) by job (guest, #670)

I have a laptop with 128MB RAM and Firefox is completely unusable in every way. What were they thinking? Why not go with KHTML? Or some embedded browser like Dillo?

Firefox?
Posted Jun 29, 2006 9:15 UTC (Thu) by wstephenson (guest, #14795)

Presumably they want to use OLPC as a driver to get Firefox and Evolution slimmed down and suitable for low memory devices. Sorrow and weep, RAM vendors of the world! They could just consider existing small-footprint solutions such as KHTML or KOffice from the KDE stable; other alternatives exist.

Firefox?
Posted Jun 29, 2006 9:57 UTC (Thu) by jg (guest, #17537)

Firefox yes, Evolution no.

We need a good browser, but Evo is way too complicated (and huge) to be considered.

Regards,
- Jim

Firefox?
Posted Jun 29, 2006 10:34 UTC (Thu) by drag (guest, #31333)

If you're going to run Firefox then Thunderbird may be a good choice for email and such.

If they get that xulrunner stuff going well, then that should cut down on the size, and hopefully the memory requirements, of both applications, correct?

Firefox?
Posted Jun 29, 2006 14:49 UTC (Thu) by nix (subscriber, #2304)

xulrunner cuts down disk space requirements, but it'll only cut memory usage down if you have both running at once.

Firefox are you entirely on the level here Jimbo?
Posted Jul 2, 2006 21:11 UTC (Sun) by Lovechild (guest, #3592)

Firefox, being not only a shitty application with poor UI design and hideous memory/CPU use, is not easily slimmed down. KHTML (or gtk-webcore) with a custom UI would be a much saner choice, maybe looking at what Nokia has been doing with the wonderful 770 device. With a small screen we need to make the most of it, and the 770 has really shown that it is possible to design applications on small devices with full functionality.

Anyway, on to business: Philip Van Hoof's tinymail project might fill your email niche; it uses next to no memory and is dead fast. Not only is this a wonderful project in itself, but Philip is developing it at a pace that can only be explained by having caffeine directly injected into his bloodstream.

Finally, I want to thank you for picking Red Hat as your technology partner in this venture and helping the free software community think about the lofty goals we should set for our work.

I am, of course, a pledger to buy one at overprice to help children get an education.

- David Nielsen

Firefox?
Posted Jun 29, 2006 10:12 UTC (Thu) by kleptog (subscriber, #1183)

I feel your pain. I have a 128MB machine which was running fine on Debian Sarge. I've upgraded some stuff to see if there have been some worthwhile improvements. Instead I get around the same features, but now it swaps all the time. The phrase "hemorrhaging memory" really hit home. I honestly don't know what could take 118MB of memory just to have a few tabs open.

Interview: Jim Gettys (Part I)
Posted Jul 3, 2006 11:52 UTC (Mon) by nhippi (subscriber, #34640)

> The Marvell wireless chip, which has an ARM 9 and 92K of RAM, can forward packets in the mesh network while the processor is suspended to RAM. This capability has been demonstrated in the lab, and Michail Bletsas is confident of the outcome; in fact, it was an actual demonstration that convinced us to use Marvell. Other wireless vendors lack this capability.

This is incorrect. Prism54 and its newer versions by Conexant have an ARM9 and 64K of RAM (fullmac chips had even more). There is even a reverse engineering project to run your own firmware on it:

http://jbnote.free.fr/islsm/doku.php?id=documentation:isl...
http://prism54.org/freemac.html

> In most areas, we're still pedal to the metal on basic problems like device drivers, and finishing up LinuxBIOS + Linux as boot loader so that we can support installation over the (mesh) network.

I hope OLPC isn't tying itself too tightly to the x86 and Geode platform.