From: Andrew Reilly
Newsgroups: comp.arch
Subject: Re: M68k add to memory is not a mistake any more
Date: 28 Mar 2012 22:29:43 GMT
Message-ID: <9thhmnFdvbU1@mid.individual.net>
References: <415d56a1-a8bb-49cd-9b3c-0d53b2a9e171@gr6g2000vbb.googlegroups.com>
 <22689371.4694.1330447517208.JavaMail.geo-discussion-forums@ynkz21>
 <581ed8f5-b08f-4657-ba02-f7fc36e8bfda@q12g2000yqg.googlegroups.com>
 <23c57d6f-9b53-4336-9ec6-4b9d1a401f91@t16g2000yqt.googlegroups.com>
 <9tev27Fie3U1@mid.individual.net>
User-Agent: Pan/0.135 (Tomorrow I'll Wake Up and Scald Myself with Tea; GIT 30dc37b master)

On Wed, 28 Mar 2012 18:09:27 +0000, ChrisQ wrote:

> Every cpu will have a data sheet definition of what happens when an
> interrupt is taken. The general case is that the current instruction,
> prior to interrupt, is guaranteed to complete and current state saved
> before the interrupt is taken. You then have a known state in terms of
> the instruction sequence, which can be re-entered once the interrupt
> handler is complete. Assuming registers are saved within the handler,
> how can the interrupt handler code interact with any thread, other than
> by intent?

Well, Nick has pointed out that there's a bunch of extra-CPU state that
isn't even necessarily knowable by the author of a particular interrupt
handler, that can get in the way, but that wasn't the point I was trying
to make.

The problem is that interrupt handler code *necessarily* interacts with
other threads, by intent. If there were no interaction, you could just
as well ignore the interrupt altogether.
Ultimately, if the interrupt is to have any observable effect at all,
then it must do something that running processes or the kernel (which is
just another running process as far as the interrupt model is concerned)
can see the effect of.

This has always been pretty manageable on a platform-by-platform basis,
of course. If you arrange that the only process that interrupts
"intentionally interact with" is the OS kernel, and you control that by
diligent use of interrupt disables around critical sections, then you
could be relatively OK. Modern systems seem to have largely eschewed
that approach (because blocking interrupts doesn't work so well on
multi-processor systems), in favour of DMA hand-off and careful use of
atomic operations and lock-free data structures, but those atomic
operations have only been introduced and standardised in programming
languages in the last several years, and the OSes we all use have code
in them dating back thirty years. There are almost certainly still
issues, somewhere.

> Threads, sequence points and other high level language constructs mean
> little to a cpu running machine code, unless the machine is designed
> to be language aware.

Exactly. Which is why the interrupt handler might do the wrong thing to
a piece of program state that has been hoisted into registers by an
enthusiastic compiler. It can't know that the register state that it
just diligently saved shadows the result data structure that it is about
to update.

> If the discussion is really about dodgy system services, then fair
> enough, but you can't blame the interrupt model for that.

It's really about SMP vs critical-section protected data structures. A
carefully designed "interrupt thread" OS model is probably the safest
approach, and I'm glad to see that it also seems to be the most popular
on modern systems.

Cheers,

-- 
Andrew