Originally Published: Monday, 30 April 2001 Author: Cherry George Mathew
Published to: featured_articles/Featured Articles

Creating a Kernel Driver for the PC Speaker

Take a jaunty look at the basics of driver development as Cherry George Mathew guides us through the process of creating a driver for the PC speaker under Linux.

I decided to make the PC speaker (that invisible little thing under the PC hood that beeps) spew rock music. I wanted it to play real music, but I couldn't be bothered to write all the code from scratch to decode MP3 and so on. So I got a little lazy and decided to play games with the most documented, most sophisticated OS ever: Linux.

How do you go about it, when you are new to a city and want to get from point 'a' to 'b' with confidence? I'd take a deep breath and start walking downtown. I'd start gleaning information and discovering places. Foolhardy? Just try it in the city of the Linux kernel. It's a maze of code, a dizzy labyrinth of cross-linked directories and makefiles. When I started off, I knew that this was going to be a long project. But I had the 'staff' of DOS and the 'rod' of "Peter Norton's guide to the IBM PC and Compatibles" with me to lend me courage. So off I strode with my chin in the air, and ready to take on the worst d(a)emons.

The PC-speaker driver patches that are available for download from MIT's FTP site require you to recompile the kernel if they are to work. When I started off, I was ready to play foul and get quick results; that's why you'll find inline assembly in the code. Anyway, here is how it works...

The PC speaker: A backgrounder.

The internal speaker is tied to the buffered output of the 8254 timer chip on all PCs. The output of the 8254 timer is further latched through the integrated system peripheral chip, through port 61h. A little ASCII art should help, I think. Here goes:

[Figure: the 8254 timer feeding the speaker through the port 61h latch, with the PIC alongside. PIC stands for Programmable Interrupt Controller.]

The base clock frequency of the 8254 is 1193180 Hz, which, incidentally, is 1/3 the standard NTSC colour subcarrier frequency. The counters hold divisor values which, roughly speaking, are used to divide the base frequency. Thus the output of channel 0 will be at a frequency of 1193180 Hz if counter0=1, 596590 Hz if counter0=2 and so on. A value of counter0=0 stands for 65536, so counter0=0 => a frequency of approximately 18.2 Hz, which is precisely the rate at which the PIC is made to interrupt the processor. In DOS, the PIC is programmed to call the Interrupt Service Routine (ISR) at vector 8.

Effectively this means that the value of counter 0 determines the rate at which the timer ISR (vector 8 in DOS) is called. Therefore, if the same person wrote both the code for the ISR and the code for programming counter 0 of the 8254 timer chip, he could get his ISR called at whatever predetermined rate he required.

All this is leading to another aside.

Digital Audio: An Aside.

When you hear sound, you know something near you is vibrating. If that something is a speaker cone, you know immediately that there is an electrical signal driving it. So we could always grab the signal generator by the scruff, if we want to snuff out the noise. If we want audio, we need a vibrating, or alternating, voltage. And we know that digital implies numbers, 1s and 0s. How do we put all of this stuff together and create digital audio?

Let's imagine that we want a continuous hum to wake us out of slumber in the morning. Bless the man who tries to sell this gadget to me! We need a continuous sine wave. Something like:

[Figure: a sine wave sampled 14 times, with sample values running between -7 and +7.]

The numbers represent how loud the noise gets at every instant. You're involuntarily doing DSP here. DSP is a year-two pain-in-the-neck paper for most Electrical Engineering undergraduates. (I'm one of them. Accept my hearty condolences.) So I'd better mention that you're actually looking at samples. These values are all you need to recreate the wave we need. Do that continuously, and you have a continuous wave. So if we ran through the numbers starting at 0, up through 7, back through 0, down through -7, and back up to 0, all in a second, we'd get a very approximate sine wave at 1 Hz. (Remember, Hertz is cycles per second.) Got the mechanics of the thing? Want a sine wave with a smoother curve? Just increase the number of samples you take per second. Here we've done 14. How about 44100? That's the rate a CD player spews the numbers out to its DAC. DAC stands for Digital to Analog Converter; it's the little gadget that converts the 1s and 0s that make up the binary numbers we're talking about into a real, analog, time-varying voltage. Our little coding technique is called pulse code modulation. There are different ways to code the pulses, so we have PCM, ADPCM etc. The waveform above could be termed "4-bit signed mono PCM at a 14 Hz sampling rate".

1 Bit DAC

So you ask me, where does all this come in when we're talking about the PC speaker? How about a custom timer ISR to vibrate the speaker cone at the requisite frequency, so that all the ISR programmer has to do is make the PC speaker cone move to the required amplitude (distance from the zero line) according to the sample value he gets from digital data, from a CDROM, for example. This means that we can set up a timer ISR at 44100 Hz, and that is CD-quality music staring at us! Perfect logic, if you have a DAC to convert every sample into the corresponding analog voltage. In fact, the parallel port DAC driver does just that. Just rig up an R-2R ladder network of resistors, tie a capacitor across the output, feed it to any amplifier (even a microphone input will do), and voila, you have digital music!

Alas, things are not all that simple with the PC speaker. All because the PC speaker is not tied to a DAC at all but, of all things, to a timer chip. Take a look at the waveform output of a timer chip for, say, a sine wave:

[Figure: a square wave that only ever switches between 0V and +5V.]

We have two discrete values to play around with: one +5V, the other 0V, and nothing in between. How do we get the analog waveform? Oh man, why hast thou asked the impossible? Ask the designers at IBM who designed the first XT motherboards!

But we do have a very fragile, subtle solution. The techie terms are 1-bit DAC, chopping, pulse-width modulation, and so on and so forth.

It's rather simple and easy to implement, and somewhere down the line, it was bound to happen. I doubt that the old XT bugger at IBM ever dreamt of 1.5 GHz Pentiums when he fixed his 8088 onto the motherboard for the first time.

The idea is to drive the PC speaker cone in bursts when we can't quite push it up to the mark smoothly. Mind you, at 22 kHz the cone is a mighty lazy bloke; it reluctantly moves up to the mark. Halfway through, take a rest, so that if it's overdone and the cone has overshot, it gets time to come back down. Something like antilock brakes in automobiles. When you press the brake pedal halfway down, the mechanism starts alternately pushing the brakes on and off. When you're standing on the pedal, the brake shoes are not quite stuck to the wheel drum; they're hammering at a furious pace. So you don't get a locked wheel. Similarly, the more frequently you hammer the speaker cone with a +5V pulse, the farther it moves from the centerline. Bingo! Vary the frequency of pulsing according to the required amplitude. I named the DOS version fm.com just to remind myself that the idea was indeed ingenious.

Linux, Here We Come.

The Linux kernel is an amazing piece of programming in that it has been organized so well that a person with little or no knowledge of assembly language can write a lot of kernel code (in fact, 99% of the kernel is written in C). It is also designed in such a way that device driver writers are given a preset environment and an elegantly exhaustive programming interface to write code.

The kernel code is very portable, i.e., it can be compiled for a variety of machines (processors like the i386, Alpha, SPARC). I think it makes good sense to write code templates which can be elaborated and tailor-made for individual hardware. In English, I could best illustrate this principle with an example. Suppose that you want to publish a PhD thesis on how to wash clothes using your brand of washing machine. You'd write a sequence of steps starting from:

1) Insert the power cord into the wall socket and switch on the power


n) Finally, retrieve your garments from the soaking mess and dump them on the clothesline.

The sequence from 1 to n would take minor variations depending on whether your washing machine was semi or fully automatic, whether it was top or side loading (try step 'n' from the side loading washing machine, and send me an e-mail about it) and other variables. The instructions in your thesis would be fine for one washing machine, but how about if you were a freelance user manual writer, and needed to write manuals for a thousand brands?

Take the case of the /dev/dsp device interface, the default interface for PCM (pulse code modulated) and coded PCM sound: much of the interface was designed by Hannu Savolainen, with Alan Cox making significant contributions. But these designers, quite rightly, didn't make room for one teeny-weeny little device called the PC speaker, in favor of the AWE64 and cards of that kind. They assumed that all DSP devices would at least have DMA support, or on-board buffers, if not coprocessors (i.e., on-board processors). So they put the DMA registration code in as a mandatory part of the OSS API. That's where we begin the real hacking. We have to avoid the standard OSS interface and rebuild it from scratch. Which means it's time for another technical discussion: character devices in Linux.

Character Devices in Linux

In Linux, there are mainly two kinds of devices: Block and Character. (Ignoring network devices as they're not really "devices", more like interfaces.)

Block devices are assumed to have certain characteristics like reading and writing in blocks, buffering, partitioning etc. The hard disk drive is the perfect example of a block device. An application normally accesses a hard drive through a file system driver. That's why in Unix you mount disk drives and do not access them sector-by-sector.

Character devices are meant to be read and written one byte at a time (e.g. the serial port), and are not buffered. An application accesses them by doing ordinary file operations on the corresponding device nodes. Device nodes are special "files" which can be accessed through the ordinary path tree. So if you want to write to the sound device, by convention /dev/dsp is the published device node to use for that. Note that any device node that points to the device number registered by the driver can be used to access that driver. For example, the /dev/dsp node is attached to device number 14/3 (try `file /dev/dsp` on your system). You could equally well access it via /dev/mynode, if /dev/mynode points to 14/3. Check the mknod man pages for the exact semantics.

Now if you have a .wav file in a specific format, say 16-bit stereo raw PCM, then to make it play on the system sound device, you might open the /dev/dsp node and your .wav file with the open system call, read a block of data from the .wav file, and write it to the /dev/dsp node, using the read and write system calls respectively. AHA! And guess what client is readily available for this? Our very own cp. So next time, try cp -f fart.wav /dev/dsp. And tell me how it sounded. I'll bet that unless you're very lucky, you won't get the right sound even if you play Celine Dion. That's because the sound driver needs to be told what format the raw data it gets is in. More often than not, you'd be trying to play a 16-bit stereo file at 44.1 kHz on an 8-bit mono 22 kHz driver. That's like trying to play an LP disc at the wrong turntable speed.

The ioctl (short for input/output control) system call is used on /dev/dsp to talk to the device driver. Unfortunately, the exact semantics of the ioctl call are left to the device driver writer's discretion. That's sort of like the chaos one gets in the DOS software market. Thankfully, we have a few recognized conventions in Linux, the most popular of which is the OSS, or Open Sound System. This is the interface implemented in Linux by Savolainen and co. So we have XMMS plug-ins for OSS on the application side, and scores of device drivers on the kernel side.

The Kernel

When an application makes the open "call", it's obviously calling something. That something is a kernel routine (remember, open is a system call). The kernel is designed to pass the call on to the corresponding device driver. The amazingly nice thing about the Linux kernel is that you can tell the kernel to call your routine for a particular device number. This is called device callback registration, and it is a kernel-mode call, i.e., you cannot write applications that make these calls and run them from the terminal. You use insmod for that, and write a special program called a kernel module, which insmod can load into kernel space and link against the kernel. The main difference between system calls and kernel-mode calls is that system calls have to conform to general conventions if they are ever to be recognized as part of Unix. The kernel, on the other hand, is Linux. So it's just Linux conventions one has to follow in kernel programming, and mind you, Linux kernel conventions change nineteen to the dozen per kernel release. That's why you see a "pre-x.xx.xx version compile only" warning with many releases of module binaries. At a minimum, we need a read and a write callback routine, besides open and close. The Linux kernel specifies a routine called init_module, and another called cleanup_module, which are called by the kernel at the insertion and removal of our module. (Somewhat like main() in user space.) In other words, when we write an init_module routine, we assume that we have full control over the system ports, memory, etc., and that we can call all available kernel functions.

A Working Device Driver

For us the main job is to get a working device driver that can access the PC-speaker through the 8254 timer ports, and do the tricks that'll copy the application's sound data to the PC-speaker, byte-by-byte.

We'll create a device node called /dev/pcspeaker, and a driver called myaudio.o, which can be loaded into the running kernel using insmod myaudio.o, and removed using rmmod myaudio.

Let's take a look at the program structure. We have the following tasks to do:

1) Register a character device.
2) Hook the timer interrupt vector and set the interrupt to the correct sampling rate.
3) Print a message that says Phew!

The kernel will tell you if anything went wrong. In many cases, it'll reboot the system for you.

When the device is unloaded, we need to restore the system to its previous state by the following steps:

4) Unhook the timer interrupt vector and reset the interrupt to the old rate.
5) Unregister the character device.
6) Print a success message.

A Look at myhandler()

The sample code is in two files, myaudio.c and myaudio.h. myaudio.c contains the device registration routines that do all the above tasks. myaudio.h contains a very important routine, the ISR (Interrupt Service Routine), named myhandler(). I think the steps given above are best explained by reading the code in myaudio.c, so let me turn your attention to myaudio.h, and to myhandler() in particular.

Step number 2, above, says: "hook the timer interrupt vector". This means that the ISR is to be set up in such a way as to be executed at exactly the sampling rate we intend. This means that when I write the code in the ISR, I can be reasonably sure of the following: a) the next datum from the user application, if available, is to be fetched; b) it needs to be processed into an 8254 counter 2 value (discussed in detail above); c) this counter value is to be dumped into the 8254 counter 2 register, i.e. the delay for the PC speaker is set according to the value of the fetched datum; and d) the system scheduler has not yet been called! Decide whether to call it.

Step d) needs an aside:

If you've read through setvect() in myaudio.c, you'll find that setvect uses a few gimmicks to put myhandler into the system vector table. This is Intel 386+ specific. In the real mode of 8086 operation, all one needs to do to revector an ISR is to overwrite the corresponding entry in the interrupt vector table (IVT), which starts at memory address 0000:0000 in increments of 4 bytes. (Because a fully qualified long pointer on the 8086 is 32 bits: cs:ip.) In other words, for interrupt 8, which is the default BIOS setting for IRQ 0 of the PIC, just change the pointer value at 0000:0020 to the full address of myhandler(). Things are a little more complicated here. In 386+ protected mode, in which the Linux kernel runs the processor, the IVT is replaced by the IDT, or Interrupt Descriptor Table. A meaningful description of the IDT would take a whole HOWTO, but I'll assume that you know about 386+ protected mode if you want to, and save all the gory details for your PhD thesis. What we really need to know is that the pointer to myhandler is scattered over an 8-byte descriptor. That information is put together using some cute GNU assembler statements, and the original pointer we displace actually points to the system SCHEDULER, a special routine in every multitasking operating system. The responsibility of the SCHEDULER is to pluck control from one program when its time slice is over, and give control to the next. This is called pre-emptive multitasking. In Linux, the time slice given to a process is 10 milliseconds. Can you guess the rate at which the default timer ISR is called? It's a value called HZ in the Linux kernel: 100 times per second.

The catch here is that while the original ISR (the scheduler) needs to be called at 100 Hz, or HZ, our ISR requires calling at the sampling rate, usually 22 kHz. And if we neglect to call the original ISR, all hell's going to break loose. There's a simple solution waiting. If you know the rate at which you're called, and the rate at which to call the original ISR, just call it once every so many times. In other words: at 22 kHz, increment a counter on every tick, and when the counter reaches 220, call the old ISR; otherwise, send an EOI (End Of Interrupt) to the PIC. Thus the old ISR gets called at exactly 100 Hz! Black magic!! If you forget to compensate for the rates, it's very interesting to observe what happens. Just try it. On my system, the minute hand of xclock was spinning like a roulette wheel!

If you take a look at the figure above, the one which shows how the 8254 timer interrupt is hooked to the PIC, you'll notice that when the 8254 wants to interrupt, it tells the PIC via the IRQ 0 line (which, incidentally, is just a piece of copper wire embedded in the motherboard; nowadays, of course, a number of the older chips are merged into one package, so don't go snooping for a PCB trace labeled IRQ 0 on your motherboard!). The PIC decides whether, and when, to interrupt the processor. This is a standard method of sharing interrupts, called interrupt priority resolution (or prioritization), which is the sole purpose of the PIC. In Linux, the PIC is reprogrammed to deliver the timer interrupt (IRQ 0) at vector 0x20, as against the vector 8 setting of the BIOS in DOS. After every interrupt, the corresponding ISR is expected to do an EOI, which is essentially an outportb(0x20, 0x20). So a little care is needed to make sure you don't send a double EOI: one from you, and one from the original ISR, which doesn't know about you.

One Last Point

I guess that's it, but before I run away satisfied that I've shoved Latin up a thousand throats, I want to clear up a few things about the sample code in myaudio.c and myaudio.h. Linux has a formal method for claiming interrupts for device drivers. Trouble is, by the time a module is loaded, the scheduler has already claimed the timer interrupt. So we have to hack a bit and steal it from the scheduler. That's why we had to discuss IDTs and stuff. I haven't implemented the OSS interface, even though I put it on a TODO list earlier. Nevertheless, it's possible to listen to MP3 music with the following:

As the root user, chdir to the directory where you've copied the source files myaudio.c, myaudio.h, and myaudio.mak, then build and load the module:

make -f myaudio.mak

insmod myaudio.o

cat /proc/devices

If you see an "Internal Speaker" entry, we're ready for some fun; note its major number (254 here). Else forget about the whole thing.

mknod /dev/pcspeaker c 254 0

(The major must match what /proc/devices reported; the driver ignores the minor.)

mpg123 -m -r 22000 --8bit -w /dev/pcspeaker x.mp3 # this should play x.mp3


My name is Cherry George Mathew. I'm a third-year Electronics Engineering undergraduate student at the College of Engineering, Adoor, Kerala, India. Any questions about this article may be sent to berry.plum@mailcity.com, which, time permitting, I will try to answer.

Note: All these operations need to be done as the root user. So I'm assuming that you're using your own machine, and are ready to trash any part of it, maybe permanently. I cannot take any responsibility for what happens to your system because of my code, so I'll insist that you try it only at your own risk.


#include <linux/module.h>
#include <linux/version.h>
#include <linux/linkage.h>
#include <linux/malloc.h>
#include <linux/soundcard.h>
#include <asm/io.h>
#include <asm/uaccess.h>
/* #include <linux/kernel.h> */
/* #include <linux/vmalloc.h> */

/* Among many hazy assumptions, we decide that 
- gcc compiler specific operand passing assumption for type 
  'long', calling proc cleans up stack
- Stack is push down, ie, grows to lower addresses
- 32bit far return does popl eip, popl cs(msw neglected) in that order
- 'leave' cleans up the stack...........................???
for a baby of an interrupt handler routine.......               */

/*****************helper functions in use***************/
asmlinkage void myhandler(void);
static int fops_sync_output(struct inode *inode, struct file *file);
void sync_output();
long getvect(long vector_number);
long setvect(long vector_number, long new_handler_address);
int helper_get_pit_count(void);
void helper_set_pit_count(int count);
static void speaker_close(struct inode *inode, struct file *file);
static int speaker_open(struct inode *inode, struct file *file);

static ssize_t speaker_read(struct file *file, char *buf,
                size_t count, loff_t * ppos);

static ssize_t speaker_write(struct file *file, const char *buf,
                 size_t count, loff_t * ppos);
static int speaker_ioctl(struct inode *inode, struct file *file,
             unsigned int cmd, unsigned long arg);

/********************GLOBAL STRUCTURES ********************/
struct file_operations speaker_fops = {
    NULL,           /* llseek */
    speaker_read,
    speaker_write,
    NULL,           /* readdir */
    NULL,           /* poll */
    NULL,           /* we'll support speaker_ioctl later,
                       when the OSS ioctls are fleshed out */
    NULL,           /* mmap */
    speaker_open,
    NULL,           /* flush */
    speaker_close,  /* release (2.2-era slot order assumed) */
};

static char berio;
static long timer_idt_ptr, data_buffer, data_ptr;
static int sampling_factor, canplay, pit_counter, compensation_count,
    temp_pit_counter, old_pit_counter, buffer_length, virgin,
    device_major, no_sleep_just_try_again;

struct __stack_pad_tag {
    long __stack_padding1, eax, __stack_padding2;
};

long setvect(long vector, long hook)
{
    struct __stack_pad_tag stack_pad __attribute__ ((packed));

    asm("sidt %3\n\t"
        "leal %3, %%eax\n\t"
        "movl 2(%%eax), %%eax\n\t"
        "leal (%%eax,%1,8), %%ecx\n\t"  /* ecx -> IDT gate for 'vector' */
        "movl 4(%%ecx), %%eax\n\t"
        "xorw %%ax, %%ax\n\t"
        "movw (%%ecx), %%ax\n\t"
        /* Remember, pointer to int handler is in eax, we've not
           pushed it */
        "cli\n\t"
        "movw %%bx, (%%ecx)\n\t"
        "shrl $16, %%ebx\n\t"
        "movw %%bx, 6(%%ecx)\n\t"
        /* I'm under the assumption that eax can safely be returned
           to pseudo register 'stack_pad.eax' */
        "sti"
        : "=a" (stack_pad.eax)
        : "c" (vector), "m" (stack_pad.__stack_padding1),
          "m" (stack_pad.eax), "m" (stack_pad.__stack_padding2),
          "b" (hook));
    return stack_pad.eax;
}

long getvect(long vector)
{
    long eax, __stack_padding1, __stack_padding2;

    asm("sidt %4\n\t"
        "leal %4, %%eax\n\t"
        "movl 2(%%eax), %%eax\n\t"
        "leal (%%eax,%1,8), %%ecx\n\t"
        "movl 4(%%ecx), %%eax\n\t"
        "xorw %%ax, %%ax\n\t"
        "movw (%%ecx), %%ax"
        : "=a" (eax)
        : "c" (vector), "m" (__stack_padding1),
          "m" (eax), "m" (__stack_padding2));
    return eax;
}

/* A little glitch in the stack padding - not to worry though,
 * it works just fine...... (%4 vs %3 in setvect) */

asmlinkage void myhandler(void)
{
/* pushl %%ebp; movl %%esp,%%ebp */
    asm volatile ("
        pushl timer_idt_ptr
        movw canplay, %cx
        jcxz skip_isr
        inb $0x61, %al
        orb $3, %al
        outb %al, $0x61
/* fetch data */
        movl data_ptr, %ebx
        movw (%ebx), %dx
/* load mode register of pit */
        movb $0xb0, %al
        outb %al, $0x43
        movw pit_counter, %ax
        xorb %dh, %dh
        mulw %dx
        movb $8, %cl
        shrw %cl, %ax
/* load counter 2 */
        orw $1, %ax
        outb %al, $0x42
/* dump MSB into counter 2 assuming it to be zero */
        xorb %al, %al
        outb %al, $0x42
/* increment data pointer */
/* movb berio, %cl
   xorb %ch, %ch
   jcxz data_ptrplus
   add $1, data_ptr */
data_ptrplus:
        incl data_ptr
        movl data_ptr, %eax
        subl data_buffer, %eax
        cmpw %ax, buffer_length
        jnc skip_isr            /* We've hit the end of the buffer */
        movw $0, canplay        /* So just switch off playback to
                                   indicate end of buffer */
skip_isr:
        decw temp_pit_counter
        jz hop_to_kernel
        movb $0x20, %al         /* EOI */
        outb %al, $0x20
        popf
        popal
/* subl $4, %esp - readjust stack for iret */
        pop timer_idt_ptr
hop_to_kernel:
        movw compensation_count, %ax
        movw %ax, temp_pit_counter
        popf ");
/* return stack has been set up -> oldhandler. End with popa; ret */
    asm("popal
        ret ");
}


#include "myaudio.h"

static int
speaker_ioctl(struct inode *inode, struct file *file,
          unsigned int cmd, unsigned long arg)
{
    int val;
    struct audio_buf_info _infobuffer = { 1, 1, 32000, 32000 };
    /* we're assuming that this stuff is on a local stack. In that
       case the above initializations assume the same values on each
       call to ioctl....... ie, 1 buffer fragment available, 1 also
       max no: of frags, size in bytes of fragment, amount of
       available mem. */

    switch (cmd) {
    case SNDCTL_DSP_SYNC:   /* reset buffers */
    case SNDCTL_DSP_POST:   /* do launch output ->dmap whatever
                   that means */
    case SNDCTL_DSP_RESET:  /* reset dsp */
        sync_output();
        return 0;

    case SNDCTL_DSP_GETFMTS:    /* return in val, dsp format mask */
        val = AFMT_U8 | AFMT_U16_LE;
        break;

    case SNDCTL_DSP_SETFMT: /* set dsp format mask and return val */
        if (get_user(val, (int *) arg))
            return -EFAULT;
        if (val & AFMT_U8)
            berio = 0;
        else if (val & AFMT_U16_LE)
            berio = 8;
        else
            return -EINVAL;
        break;

    case SNDCTL_DSP_GETISPACE:  /* output in audio_buf_info
                   (include/soundcard.h) amount of
                   unused internal buffer space -
                   invalid for o/p only device */
        return -EINVAL;

    case SNDCTL_DSP_GETOSPACE:  /* same as above but valid */
        if (canplay) {
            _infobuffer.fragments = 0;
            _infobuffer.bytes = 0;
        } else {
            _infobuffer.fragments = 1;
            _infobuffer.bytes = 32000;
        }
        if (copy_to_user((void *) arg, &_infobuffer, sizeof(_infobuffer)))
            return -EFAULT;
        return 0;

    case SNDCTL_DSP_NONBLOCK:   /* set non-blocking flag in the dev
                   structs */
        no_sleep_just_try_again = 1;
        return 0;

    case SOUND_PCM_WRITE_RATE:  /* returns speed in val in Hz */
    case SOUND_PCM_READ_RATE:   /* returns read speed in val.
                       Want to return -EINVAL */
        return -EINVAL;

    case SNDCTL_DSP_STEREO: /* returns and sets the number of
                   channels asked for (1,2) */
    case SOUND_PCM_WRITE_CHANNELS:  /* returns and sets the
                       number of channels
                       requested for (0,1) */
    case SOUND_PCM_READ_CHANNELS:   /* We return -EINVAL here */
    case SOUND_PCM_READ_BITS:   /* sets and returns bits per sample */
    case SNDCTL_DSP_SETDUPLEX:  /* no need to support */
    case SNDCTL_DSP_PROFILE:    /* no need to support */
    case SNDCTL_DSP_GETODELAY:  /* I think that this is simply the
                   time it takes for the damned thing
                   to stop playing when told to stop */

      /********** from dma_ioctl ****************/
    case SNDCTL_DSP_GETOPTR:    /* we need to support this URGENTLY -
                   something called a count_info struct is
                   set */
    default:
        return -EINVAL;
    }

    return put_user(val, (int *) arg);
}

static ssize_t
speaker_write(struct file *file, const char *buf,
          size_t count, loff_t * ppos)
{
    if (!count)         /* Flush output */
        return 0;

    if (!canplay) {
        count = (count <= 32000 ? count : 32000);
        if (copy_from_user((void *) data_buffer, buf, count))
            return -EFAULT;
        data_ptr = data_buffer;
        buffer_length = count;
        canplay = 1;
        return count;
    }

    return 0;           /* The idea is to return 0 bytes and
                           leave the problem to the client */
}

static ssize_t
speaker_read(struct file *file, char *buf, size_t count,
         loff_t * ppos)
{
    return -EINVAL;
}

static int speaker_open(struct inode *inode, struct file *file)
{
    /* Check for re-entrancy */
    if (virgin >= 1)
        return -EBUSY;
    virgin = 1;
    return 0;
}

static void speaker_close(struct inode *inode, struct file *file)
{
    virgin = 0;
}

static int fops_sync_output(struct inode *inode, struct file *file)
{
    return 0;
}

void sync_output()
{
    canplay = 0;
    data_ptr = data_buffer;
    buffer_length = 0;
}

int helper_get_pit_count(void)
{
    long ntsc = 1193180;

    return ntsc / HZ;
}

void helper_set_pit_count(int counter)
{
    outb(0x36, 0x43);   /* counter 0: lsb then msb, mode 3 */

    outb((char) (counter & 0xff), 0x40);

    outb((char) (counter >> 8), 0x40);
}

int init_module(void)
{
    /* Initialize all global variables */
    if ((device_major =
         register_chrdev(0, "Internal Speaker", &speaker_fops)) < 0) {
        printk(KERN_WARNING "Cannot get device major number");
        return device_major;
    }

    no_sleep_just_try_again = 0;    /* we're into the big leagues
                       now, ioctls, putting
                       processes to sleep ...... */

    virgin = 0;         /* Just in case you didn't
                   know....... ;)  */
    berio = 0;          /* set to 8 for 16 bit mono */
    canplay = 0;
    sampling_factor = 1;    /* ie; counter = 22Khz /
                   sampling_factor; */
    pit_counter = 54;       /* 1193180 / 54 is roughly 22 kHz */

    /* allocate room for our 32KB buffer */
    data_buffer = (long) kmalloc(32000, GFP_KERNEL);
    if (!data_buffer) {     /* in case kmalloc FAILS */
        unregister_chrdev(device_major, "Internal Speaker");
        return -ENOMEM;
    }

    timer_idt_ptr = setvect(0x20, (long) myhandler);
    old_pit_counter = helper_get_pit_count();
    temp_pit_counter = compensation_count =
        old_pit_counter / pit_counter;
    helper_set_pit_count(pit_counter);  /* rev the timer up to ~22 kHz */

    printk("Hopefully, irq0 has been hooked from vector 0x20 to %p  \n",
           (void *) myhandler);
    return 0;
}

void cleanup_module(void)
{
    setvect(0x20, timer_idt_ptr);
    helper_set_pit_count(old_pit_counter);  /* back to HZ ticks */

    unregister_chrdev(device_major, "Internal Speaker");

    kfree((void *) data_buffer);

    printk("Phew! Graceful exit!!!!! \n irq0 reset to %p \n",
           (void *) timer_idt_ptr);
}


INCLUDEDIR = /usr/src/linux/include

VER = $(shell awk -F\" '/REL/ {print $$2}' $(INCLUDEDIR)/linux/version.h)

# Assumed flags for an old-style kernel module build
CFLAGS = -D__KERNEL__ -DMODULE -O2 -Wall -I$(INCLUDEDIR)

OBJS = myaudio.o

all:	$(OBJS)

myaudio.o:	myaudio.c myaudio.h
	cc $(CFLAGS) -c myaudio.c

install:	all
	install -c myaudio.o /lib/modules/$(VER)/misc

clean:
	rm -f *~ core $(OBJS)