
What is an IRQ?


Engneer (Instructor)
Jan 14, 2006
I have seen so many definitions, and all of them wrong, that I wonder what exactly is being taught at schools around the world.

IRQ = Interrupt Request Queue

It is a software stack where all interrupts are queued for processing, usually serviced by the I/O, but sometimes with processor intervention.
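
In code terms, the idea is something like the sketch below: a fixed-depth software queue of pending interrupt requests, drained by whoever services them. Every name in it is illustrative; I am not quoting any real operating system's structures.

/* Minimal sketch of an Interrupt Request Queue as a software
 * structure: a ring buffer of pending requests. All names are
 * illustrative, not taken from any real operating system. */
#include <stdbool.h>
#include <stdint.h>

#define IRQ_QUEUE_DEPTH 256

struct irq_request {
    uint8_t  line;    /* physical interrupt line that fired     */
    uint32_t vector;  /* handler vector assigned to the request */
};

struct irq_queue {
    struct irq_request slot[IRQ_QUEUE_DEPTH];
    unsigned head, tail;               /* head == tail means empty */
};

/* Queue a new request; returns false if the queue is full. */
static bool irq_enqueue(struct irq_queue *q, struct irq_request r)
{
    unsigned next = (q->tail + 1) % IRQ_QUEUE_DEPTH;
    if (next == q->head)
        return false;                  /* full: request would be lost */
    q->slot[q->tail] = r;
    q->tail = next;
    return true;
}

/* Pull the oldest pending request off for servicing. */
static bool irq_dequeue(struct irq_queue *q, struct irq_request *out)
{
    if (q->head == q->tail)
        return false;                  /* nothing pending */
    *out = q->slot[q->head];
    q->head = (q->head + 1) % IRQ_QUEUE_DEPTH;
    return true;
}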

On the old Interrupt Bus there were 16 physically unique interrupt request lines [IRLs]. However, because most systems were 4x4, four Input/Output Modules [IOMs] and four Central Processing Modules [CPMs], this resulted in 64 physically unique, separate, and simultaneously accessible interrupts (16 lines on each of the four IOMs).

Intel and AMD have come up with a lot of schemes, but even the latest PCI-X is lame, using 16 interrupts, with 4 additional for control.

So, your interrupts are 0-15, and neither Intel nor AMD has ever gotten around to what was intended: 64 interrupts that are physically unique and accessible simultaneously.

Secondly, neither Intel nor AMD knows what the acronym IRQ means. They keep referring to it as if it were the interrupt bus; it is not. It is a stack within the IOCB [Input/Output Control Block].

And a stack is data, not hardware.

When you try to equate hardware to software, there are problems.

Plug and Play uses the Programmable Interrupt Controller [PIC] and the Advanced Programmable Interrupt Controller [APIC] to route and translate interrupt requests back and forth between the I/O and the devices. As such, the PIC and APIC are Host Controllers.
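
For the curious, here is roughly what talking to the classic 8259 PIC pair looks like from x86 Linux user space. This is only a sketch, not something to run casually: it needs root, and masking a live line can freeze the machine. The 0x21/0xA1 ports are the PC-standard 8259 mask (IMR) registers.

/* Sketch: masking one line on the classic 8259 PIC pair from
 * x86 Linux user space. Needs root for ioperm(); masking a live
 * line can hang the machine, so treat this as illustration only. */
#include <stdio.h>
#include <sys/io.h>

#define PIC1_DATA 0x21  /* master IMR: IRQs 0-7  */
#define PIC2_DATA 0xA1  /* slave  IMR: IRQs 8-15 */

static void irq_set_mask(unsigned char irq)
{
    unsigned short port = (irq < 8) ? PIC1_DATA : PIC2_DATA;
    unsigned char  bit  = 1 << (irq & 7);
    outb(inb(port) | bit, port);       /* a set bit masks the line */
}

int main(void)
{
    /* Gain access to the two PIC data ports, then mask IRQ 5. */
    if (ioperm(PIC1_DATA, 1, 1) || ioperm(PIC2_DATA, 1, 1)) {
        perror("ioperm");
        return 1;
    }
    irq_set_mask(5);
    printf("IRQ 5 masked at the 8259\n");
    return 0;
}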

The PCI Controller is actually the PCI Host Controller, and the SCSI Controller is actually the SCSI Host Controller. Both sit between the I/O and the peripherals [devices].

The "port" is an offset address to the IOCW [Input/Output Control Word] which is the same as the IOCB.

The IOCB is the block that is on the stack for every Interrupt Request in the Interrupt Request Queue.

These can easily build to 100,000 IRQs and more.
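
To give that shape, a hypothetical IOCB might look like the struct below. I am not quoting any real machine's layout; every field here is invented purely for illustration.

/* Hypothetical IOCB, invented purely to illustrate the text above.
 * No real machine's layout is being quoted. */
#include <stdint.h>

struct iocb {
    uint16_t port;      /* the "port": offset address to the IOCW    */
    uint32_t iocw;      /* Input/Output Control Word                 */
    uint8_t  irl;       /* physical interrupt request line           */
    uint8_t  iom;       /* I/O module that raised the request        */
    struct iocb *next;  /* next block in the Interrupt Request Queue */
};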

But the real problem with all devices and drivers is not the IRQ nor the IRL; it is the complete misunderstanding of how interrupts are supposed to be generated and handled, and the false assumption that modern computers can manage interrupts with software emulation.

Obviously, if you cannot access the APIC to set the Interrupt Number, but are left at the mercy of software developers who have no idea how interrupts work, neither the software nor the device is going to work.
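
For reference, on a current PC "setting the Interrupt Number" means writing an I/O APIC redirection-table entry. Below is a hedged kernel-context sketch; it assumes the IOAPIC is already mapped at its conventional 0xFEC00000 base, and ioapic_route() is my own name, not a standard call.

/* Sketch of "setting the Interrupt Number" on a PC: the OS writes
 * an I/O APIC redirection-table entry binding an input pin to a
 * CPU vector. Kernel context; ioapic_base is assumed to be the
 * kernel's mapping of the conventional 0xFEC00000 IOAPIC base. */
#include <stdint.h>

#define IOREGSEL 0x00                    /* register-select window */
#define IOWIN    0x10                    /* data window            */

static volatile uint32_t *ioapic_base;   /* maps 0xFEC00000 */

static void ioapic_write(uint8_t reg, uint32_t val)
{
    ioapic_base[IOREGSEL / 4] = reg;     /* select the register  */
    ioapic_base[IOWIN / 4]    = val;     /* then write its value */
}

/* Route IOAPIC input pin `pin` to CPU `apic_id` with vector `vec`. */
static void ioapic_route(uint8_t pin, uint8_t vec, uint8_t apic_id)
{
    uint8_t lo = 0x10 + 2 * pin;         /* low dword of the entry */
    ioapic_write(lo + 1, (uint32_t)apic_id << 24);  /* destination */
    ioapic_write(lo, vec);   /* vector; fixed delivery, unmasked   */
}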

Add to that the fact that each device is now starting to have its own controller, its own APIC complete with its own BIOS, and the confusion becomes a nightmare when adding a simple device, such as a DVD drive.

For all the professors and Intel's and AMD's engineers: go look up the design of mainframe interrupts, the interrupt bus, and interrupt handling; you've got it all wrong.

from the designer of interrupt buses, long before the PC existed.
 
I am not sure what you are trying to prove here. The world of problems revolving around IRQs in PCs is tied to backwards compatibility, back to the first 8259 devices made by Intel in the '70s. The IOAPIC needed many years before starting to gain acceptance, and only because there were just not enough interrupt lines available for all the devices in a PC. Redesigning the whole interrupt structure is unthinkable in a PC context. There will never be an "interrupt bus" in the PC architecture. Instead, more devices will share interrupts by using interfaces like USB, whose controller handles the priorities in software, leaving the hardware interrupt lines to the devices that require time-critical response.
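
For what it's worth, here is roughly what sharing a line looks like in a Linux driver today. request_irq(), IRQF_SHARED, and the IRQ_NONE/IRQ_HANDLED returns are the real kernel API; everything prefixed my_ is a placeholder, and IRQ 11 is just an example number.

/* Sketch: sharing an interrupt line in a Linux driver via
 * IRQF_SHARED. Everything prefixed my_ is a placeholder; IRQ 11
 * is just an example number. */
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/module.h>

static int my_token;      /* per-device cookie identifying us on the line */

static irqreturn_t my_handler(int irq, void *dev_id)
{
    /* On a shared line every registered handler runs; each must
     * check whether its own device interrupted. A real driver
     * reads a device status register here; 'mine' stands in. */
    int mine = 0;
    if (!mine)
        return IRQ_NONE;      /* not ours: let the next handler try */
    return IRQ_HANDLED;
}

static int __init my_init(void)
{
    /* A non-NULL dev cookie is mandatory for shared lines. */
    return request_irq(11, my_handler, IRQF_SHARED, "my_dev", &my_token);
}

static void __exit my_exit(void)
{
    free_irq(11, &my_token);
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");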

I am curious to see whether the new Intel Mac uses the same chipsets and architecture. It probably does, because nobody can start a new computer architecture on their own. The machines are too complex now. Even PC motherboard manufacturers do not know about the internals of the parts they are using. Lucky you if you have been able to design your own system.

The PC is slowly becoming what it was meant to be, a medium for information. The hardware will become a commodity, and the application software will be the differentiator.


 
I beg to remind you that it has never been the intent to recreate "Iron" on the desktop. They are just toys & not real computers.

I do not pretend to diminish your contribution to "major iron" architecture. I do agree that many drivers (DLL or otherwise) are poorly constructed, but I am not about to write my own.

We are in the 21st century and 'digital iron' is going the way of the 'analog computer'.

The PC has carved out a place as a major contributor to business from its toy beginnings. It will progress, albeit slowly, owing to the need for backward compatibility.

To try to do what I conclude you are suggesting would require a clean slate, and that will just have to be stepwise at this juncture.

Reflect back on the trials & tribs that MS went through to drop their GUI on top of DOS. Same reasons, and it just took time. I may not be around to see it, but the PC, or whatever it ends up being called, will eventually be something to admire.

rvnguy
"I know everything..I just can't remember it all
 
The PowerPC was supposed to free the world from all the constraints of the PC. The Itanium was supposed to be the next big thing in microprocessors. Millions if not billions have been spent on these architectures. Wasted money? Yes and no. Some saw a business opportunity; some saw an occasion to build a product that might evolve better. Humans are not perfect creatures, but it happened that they started to evolve in a time frame that was favorable to them. What would the present world be if, say, IBM had chosen the orthogonal Motorola 6809 or the 68000 instead of the 8088 for the first PC? No one can tell.


 
A better world. But I'm biased, based on experience.
Supposedly the 6809 was under consideration to be the AT. And their 68K lab machine went away because of a lack of support. And at the time, I think Motorola was spending all their effort getting more chips into cars.


Ed Fair
Give the wrong symptoms, get the wrong solutions.
 