Archive for the ‘QNX’ Tag

I’ve been workin’ on the WebKit all the live long day …

WebKit for Neutrino 6.4.0 that is

Lots has gone on since my last post on SRR. After finishing up my parental leave last year, I decided that after working at QNX for nearly 10 years it was time for a change, and took on a role at Crank Software.

We’re doing a lot of interesting things at Crank, mostly to do with graphics and improving the way people integrate graphical content into their embedded products. 

Crank is doing a lot of work with customers who base their user interface designs on what Apple’s iPhone and iPod Touch can do, but want to do it on systems with less powerful CPUs, smaller memory capacity and less capable graphics engines.

Oh yeah … and for those devices that are network connected, they also want to run WebKit, the engine under Apple’s Safari web browser.

As part of addressing these needs (better, faster, smaller) Crank has ported WebKit to QNX Neutrino, and since web browsers and graphical applications go hand in hand these days, we plan to provide assistance and support on this technology. 

If you are keen, you can try out an advance pre-release version of the WebKit engine by downloading it from

This is a snapshot of our development from a month ago, after we got the initial port running and passing the basic browser tests. Lots of improvements have been made since then, and we’ll update it when we hit a good stability point.

Our initial ports run on both Neutrino 6.3.2 and 6.4.0. We currently have Photon microGUI versions and expect to have an AGTDK-based version out in the coming weeks.

The development on WebKit is very active, with lots of work going on in all areas, from user interface to scalability to scripting performance. Getting the initial port up and running wasn’t a trivial amount of work, and we hope to work with the WebKit developers to backport our changes.

In the meantime, if you run Neutrino and want to be like all the other cool kids out there running WebKit, go check out the pre-release and let us know how it works for you!



No fault of your own…

If you’ve ever taken a kernel trace of an application starting up on a kernel that is 6.3.2 or later, you might have noticed a thread state in your application called STATE_WAITPAGE.

To understand what is going on here, we have to first look at mmap() and how it allocates and initializes memory.

When you allocate memory with mmap(), it first allocates a virtual address range, and then (unless you’ve passed MAP_LAZY), it will allocate the physical memory needed to back that object.

What it doesn’t do, though, is set up all the page table entries for the mapping (in practice it does set up a certain number up front, based on heuristics tied to the type of mapping).

For the rest, it waits until the program first accesses a page before setting up a page table entry for it.

This is a change from pre-6.3.2 kernels, where all mappings were set up immediately. You can re-enable the old behaviour with the procnto option -mL in your buildfile.
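For reference, the option goes on the procnto line in the bootstrap script of your image buildfile. This is only a sketch — the virtual attribute and startup program are placeholders for whatever your board actually uses:

```
[virtual=x86,bios] .bootstrap = {
    startup-bios
    # -mL super-locks all processes, restoring the pre-6.3.2 behaviour
    # of fully setting up every mapping immediately
    PATH=/proc/boot procnto -mL
}
```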

The benefit of this scheme, known as demand paging, is a speedup when the application doesn’t actually access all of the pages in a mapping. This is quite useful when mapping executable objects, since often there are quite a few pages in the text section which may never be referenced.

In some patterns of usage, though, this can induce a performance penalty, especially when there are many threads running at the same priority.

This brings us back to STATE_WAITPAGE. When a process is executing and it accesses a page that has not been mapped, or tries to write to a page that is marked read-only, it induces a page fault, which is caught by the kernel.

The fast path in the kernel will peek at the process’s page tables, and if it finds an entry it will bang it into the TLB (on some architectures, such as the x86, this is done in hardware). If it can’t find one, it needs to defer the work of handling the fault to the process manager, since that may involve complex work such as communicating with device drivers, and the necessary structures may not be accessible or consistent.

This, then, is where the thread gets blocked in STATE_WAITPAGE, and a pulse is sent to procnto, to wake up a thread to handle the request.

The procnto worker does the dirty work of initializing the memory (it may need to read a page from a disk driver for example) and setting up the pagetables.

The first time a page is read it is set up with a read-only mapping, even if the mapping is writeable, unless the underlying page has already been marked as modified. For MAP_LAZY mappings, this is the point at which physical memory is actually allocated for the page. If the allocation fails, the application will have a SIGBUS signal delivered to it (this was the same in pre-6.3.2 kernels).

This delayed initialisation supports two handy schemes.

The first is Copy On Write (COW) semantics. This is where we don’t bother making a private copy of a page that was originally shared with another mapping until the page is modified.

The second is support for writeable mappings to files. Prior to 6.3.2 you could only map a file readonly, or with the extra flag, MAP_NOSYNCFILE. This was because there was no tracking of modified pages.

When you call msync() on a shared mapping to a file, all the modified pages are written to the backing store. Now the pages can be marked read-only again, and the modified indicator can be turned off.

Now all this is great, but what about that performance penalty? Well, all that context switching can make page fault processing take quite a while if there are lots of threads in STATE_READY at the same priority. The procnto thread will be placed at the back of the queue, wait for the others to run, then it will run, potentially talk to a device driver, and then make the original thread ready again, at which point it is placed at the end of the queue.

Another thing to consider: in some circumstances you want to take the hit of talking to the backing store all at once, for determinism reasons.

A way to control this behaviour is via the mlock() and mlockall() functions. These tell procnto that you want some (or all) pages made memory resident right now.

This means that the mapping will have been fully read in from disk by the time the mmap() call returns to your program.

You still get page faults on the first write to a page, though. In that case we setup a read-only mapping (or if the underlying pages have already been modified, a read-write mapping).

If you TRULY don’t want page faults for your mappings, there are two options open to you. Since device drivers may run code in an interrupt handler, and may also run code with interrupts disabled, they can’t take page faults at all. So when you request I/O privity with the ThreadCtl() call, your process gets marked as super-locked. This means that all mappings are set up and ready to go.

Of course, only processes running with superuser privilege can request this special status. To have all processes be marked as super-locked, you can pass the -mL option to procnto. To have all processes simply be locked (the equivalent of calling mlockall(MCL_CURRENT|MCL_FUTURE)), you can pass the -ml option.

Well, this post has gone all over the map <groan>, so I’m going to sign off.



Mutex madness?

I recently received an interesting kernel trace from a customer. It showed several threads going from STATE_RUNNING to STATE_MUTEX without an intervening SyncMutexLock kernel call. He wondered if perhaps the trace was corrupted.

After looking at the trace it seemed to be sane, but the behaviour shown was somewhat puzzling. However it is explainable, and it has to do with our implementation of mutexes.

Continue reading