...making Linux just a little more fun!


The Linux Gazette Mailbag


Mailbag

Apache Server Question
OSS external midi plays to AWE
Proofing C
An idea
Speaking of taking over the world...
Followup: [LG#126] mailbag #2
Followup: [LG#127] mailbag #4
Multiple append= Directives in /etc/lilo.conf
Kmail mystery
How to recursively search HTML?
Cannot read Kingston USB pen drive with DOS and Mac partitions
PHP and Apache
Printer setup
Reading Oxygen Phone Manager files
Network Traffic Review/Filtering
Meaning of overruns & frame in ifconfig output
Search Engine Spiders
Profile has errors
Where can I find GLUE (Group of Linux Users Everywhere)
Graham Jenkins
Request for any ADC's driver source code

Apache Server Question

marnie.mclaughlin at gmail.com
Sat May 13 09:17:24 PDT 2006

Answered by: Ben

[ This one slipped through without an answer - anyone out there have a suggestion for Marnie? -- Kat ]

[Ben] - This is from a friend of mine; my memory of how to do this is rather fuzzy, so I'm hoping that someone here will have a bit more recent Apache experience than I do.

Hi Ben,

I have a Linux question for you, if you don't mind :)

My boyfriend is a Linux admin and wants to display a different directory per user, based on $_SERVER['PHP_AUTH_USER'].

For example:

http://server/home/

should point to http://server/home/user1/ or to
http://server/home/user2/, depending on the user.

Both directories should appear to be the same (i.e., /home/); home could be a PHP script or anything else that will work :)

What would you suggest or where can I look for help?

Thank you :)

Marnie
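
[ One untested possibility for anyone facing the same problem: if /home/ is already protected by Basic authentication, mod_rewrite can map it onto a per-user directory, since Apache's REMOTE_USER variable carries the same value PHP sees as $_SERVER['PHP_AUTH_USER']. A sketch, assuming mod_rewrite is available and AllowOverride permits FileInfo, in a .htaccess inside /home/ itself:

  # /home/ is under Basic auth, so REMOTE_USER is set by the time
  # per-directory rewriting runs.
  RewriteEngine On
  # Map an authenticated request for /home/ to /home/<user>/ internally;
  # the URL the browser shows stays /home/.
  RewriteCond %{REMOTE_USER} !^$
  RewriteRule ^$ %{REMOTE_USER}/ [L]

Each user then sees only his or her own directory under the one URL. ]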


OSS external midi plays to AWE

sindi keesan (keesan at sdf.lonestar.org)
Fri May 12 18:33:24 PDT 2006

Answered by:

[ No one has had an answer for this one yet. Maybe there's someone reading this issue of LG who does? -- Kat ]

I checked your knowledge base and found my own posting from 2005. The MIDI HOWTO is for ALSA, and I am using a 2.2.16 kernel and OSS.

Last year you helped me to play AWE files in linux, using my abbreviated version of Slackware 7.1 (I had compiled playmidi and drvmidi, but I needed sfxload and a sound bank). I now know how to play FM synthesis and AWE midi in DOS and linux, after setting up the sound card in DOS with ctcm and loading a soundbank with sfxload. drvmidi or playmidi -a does AWE, playmidi -4 or -f does FM synthesis, and playmidi -e is supposed to play to the external midi device, but it DOES NOT.

I also have the file I was missing that was needed to convert rpm to tgz so I can use precompiled rpm packages (rpm2cpio as used by rpm2targz).

If I have inserted awe_wave.o, playmidi -e plays AWE not external, and if I have not inserted it I get an error message about /dev/sequencer not found and device or resource being busy.

cat /dev/sndstat does not do anything (some error message about the device not existing). cat file.midi > /dev/sequencer says the device does not exist or is busy or something.

I have made /dev/sequencer and /dev/midi00 with symlink to /dev/midi and /dev/dsp and /dev/audio and /dev/music and /dev/sndstat.
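
[ For anyone retracing this: the OSS device nodes are character devices on major 14 with fixed minor numbers, listed in the kernel's Documentation/devices.txt, so a missing one can be recreated by hand. A sketch:

  # OSS character devices, all on major 14 (see Documentation/devices.txt)
  mknod /dev/sequencer  c 14 1
  mknod /dev/midi00     c 14 2
  mknod /dev/dsp        c 14 3
  mknod /dev/audio      c 14 4
  mknod /dev/sndstat    c 14 6
  mknod /dev/sequencer2 c 14 8
  ln -s sequencer2 /dev/music   # /dev/music is conventionally sequencer2
  ln -s midi00 /dev/midi
  chmod 666 /dev/sequencer* /dev/midi00 /dev/dsp /dev/audio

Adjust the permissions to taste (e.g., a dedicated audio group). ]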

In DOS I run ctcm to set the addresses and irqs, after which the pnp AWE cards work properly in DOS, then I boot with loadlin into linux and the settings are retained and I can play to external MIDI with several DOS programs, or AWE or FM synthesis.

Soundblaster AWE64 card (ISA, pnp)

Using a 2.2.16 kernel which I compiled with support for sound:

insmod soundcore
  insmod soundlow
  insmod sound
  insmod v_midi (Do I need this one? Is it causing problems?)
  insmod uart401
  (Sometimes also insmod mpu401, makes no difference)
  insmod sb io=0x220 mpu_io=0x330 irq=5 dma=1 dma16=5
  insmod awe_wave (for AWE sound only)
  sfxload /usr/lib/sfbank/synthgm.sbk (the sound bank)
  insmod opl3 io=0x388 (for FM synthesis only)

Or with a 2.4.31 kernel

  insmod soundcore
  insmod sb_lib
  insmod sb.....

Do I need support for joystick or gameport as well? My kernel has modular support for joystick but I don't have a joystick.

The only info I could find online was requests from other people who had their external midi devices working in Win95 but not in linux. With playmidi -e or Rosegarden, they either did not play, or played as FM synthesis on a card without AWE. Mine plays as AWE, perhaps because I set that as the playmidi default.

External midi works in DOS with several programs such as playb.exe (playpak) and beatmaster. We have an OEM midi cable and a Yamaha Clavinova with the cable plugged in properly (OUT of cable to IN of piano and vice versa - no sound came through when we plugged it backwards). The joystick port is not disabled on the card in ctcm, and I tried a non-AWE card too with jumper set to joystick enabled.

I also tried the basic (bare.i) kernels from Slackware 7.1 and 10.2 and I think 8.1 (one of them would not even play AWE sound). They all seem to have full sound support, mostly as modules.

What little step might I have left out, or is there a bug in playmidi?

You helped me back in around 2002 to set up a 2MB ramdisk linux with mdacon (to use both TTL and VGA monitors) and tried to teach me how to use screen and suid. I still do not have modprobe, just insmod.


Proofing C

The Answer Gang (tag at linuxgazette.net)
Wed Apr 5 14:03:27 PDT 2006

Answered by: Ben, Jason, Neil

[ This was originally part of the Talkback: 122/sreejith.html discussion. -- Kat ]

[Jason] - [ Aside to the Answer Gang: I am by no means a C expert, but I know a little bit about the language. ISTR some reader volunteering to proofread C code that comes up in future articles, but if not, I could step in and try to catch the obvious technical stuff.]

[[Ben]] - Well, Jan-Benedict Glaw volunteered - but I haven't actually seen any result from it, despite prodding him a couple of times. I guess he's been busy, or something. Your offer is gratefully accepted, since my C is too rusty to be useful except in trivial cases.

[[[Neil]]] - Feel free to CC me for a second opinion on any C/C++ stuff.

[[[[Ben]]]] - That's great, Neil; accepted with thanks. Even with Jason's kind offer, it would still be very nice to have more than one person available to do this.

Side note: anyone who contributes to the proofing process - technical, HTML vetting, whatever - please add your name to that issue's STATUS file as described there. We do not yet have a mechanism for using this, but at some point in the near future, I'm hoping to have a bit of processing added that will display those names on the index page of each issue. It will happen either via my ham-handed efforts or someone else's Python skills (the latter would be greatly preferred; when I take an axe to Python, it ain't pretty).


An idea

Benjamin A. Okopnik (ben at linuxgazette.net)
Wed Apr 19 18:23:07 PDT 2006

Answered by: Ben, Pablo, Thomas
[ cc'd to The Answer Gang for comments and further suggestions ]
On Wed, Apr 19, 2006 at 05:49:17PM -0700, Pablo Jadzinsky wrote: 
  > Dear editor, 
  >  
  > I am somewhat of a newbie to linux, and I find your magazine the best 
  > resource I have found so far. Even though I have great pleasure reading it 
  > and I find it extremely useful, I have a suggestion. 
  > I think it would be great to have a 'homework' forum where every 
  > month a script or task can be suggested, and then we (the readers) 
  > work on it during, say, 2 weeks. Some of us can send solutions to you, 
  > and then someone on your team can post some of the different 
  > solutions, with perhaps some comments on the different approaches. My 
  > experience says that the only way of really learning linux is through 
  > practice, but at the beginning it is difficult to use the system altogether. 
  >  
  > Anyway, I hope you find my comment appropriate and useful. 
  >  
  > Thanks a lot 
  > Pablo Jadzinsky 

Pablo, that's an excellent idea. I like it a lot; in fact, this is one of the things I tried to do when I wrote the "Learning Perl" series here a while ago. And your idea gives me an idea:

How would you like to write this column?

Before you start objecting on the grounds that you're a "newbie" [1], consider these two facts:

1) You don't have to come up with the answers yourself (although I certainly hope that you'll try); posting the problems to The Answer Gang should get a nice variety of answers, which you could collate and use as part of next month's article.

2) I suspect that most of us (it is certainly true for me) have by now forgotten the specific kinds of problems that new users face, and would have a tough time coming up with a list of them that would be relevant. You, on the other hand, have the advantage of your position as a new user, and can simply use any problems that you run into (after considering whether they are of general interest, of course).

Besides... I don't like to mention it, because it's sort of like twisting people's arms... but think of the *fame*. I mean, just imagine - "Pablo Jadzinsky, Linux Gazette columnist." In *bold*, even. Women will swoon, men will grit their teeth in envy, little green creatures from Beta Aurigae will finally make contact with us because we've shown ourselves to be so highly evolved... it's a winning scenario all around. Really. :)

[1] To quote Tom Christiansen, whose opinion I fully share:

  I find "Newbie" to be a grating, overly-cutsie buzzword 
  (read: obsequious neologism) used by people who are either
  trying to be overly-cutsie or else who don't know that 
  "neophyte", "beginner" or "I'm new to" work just fine. 
  It sounds like something you'd be likely to find 
  in that offensively entitled line of books called 
  "XXX for Dummies".
[[Thomas]] - Been done before via various means --- some people (including myself) tried setting various exercises in articles. But then this only targets a specific audience: the people reading that specific article. (Ben's Perl series at least had some reader feedback in this way.)

[[[Ben]]] - Speaking of titles - "Thomas Adam, Linux Gazette cynic-in-residence." Think of the fame, the... oh, right. :)

My articles were intended as lessons in basic Perl for those who wanted an easy start. The exercises were purely incidental. I believe that a column of the sort that Pablo suggested would garner an audience over time - it may even get a good jump-start right away.

[[Thomas]] - Sometimes allusions were made in the earlier editions of TAG for readers to try out ideas if they were so inclined, and to post their results in. Some did, but not many.

It's worth a try, for sure. I am greatly cynical, though, so do not be surprised if the response you get from it is low at best. You never know; you might get a 'good' month out of it.

[[[Ben]]] - Why not leave off the discouraging grumbling, and see what happens? Thomas, there are no positive outcomes to be served by carping in this case, and lots of negative ones. Why do it?

[[Thomas]] - I assume (since it's your suggestion) you'll be detailing to us in more detail the sorts of things you were meaning, with an example? Yes? Excellent.

[[[Ben]]] - In fact, if Pablo does decide to undertake this, I would rather he didn't detail them. Granting him the same courtesy that we do to any author means leaving the details to him up until the moment that he submits the finished article. There's no reason whatsoever to demand them until that point.

[[[[Thomas]]]] - I wasn't saying that. From what Pablo is saying, I get the impression that he would like us at TAG to critique the answers?

[[[[[Ben]]]]] - Actually, that was my suggestion. I saw these two birds, and I only had the one stone...

[[[[Thomas]]]] - If that's so, then that sounds like a good idea to me (is this positive enough?) I'm just curious, that's all. Heck, it's an interesting discussion.

[[[[[Ben]]]]] - You bet. :) Encouraging participation is a plus in my book.

[[[[[[Pablo]]]]]] - I can't believe I got you two to write this much. If I tell my wife that on top of doing my Ph.D., changing diapers, and starting 2 companies, I am going to be involved in writing a column, she will divorce me right away. I am seriously tempted, but let me see what I can seriously manage and I'll get back to you.

By the way Ben, your intro to scripting in bash is excellent.


Speaking of taking over the world...

Benjamin A. Okopnik (ben at linuxgazette.net)
Tue May 2 06:41:53 PDT 2006

Answered by: Ben, Martin, Suramya, Thomas
"Gee, Brain - what are we going to do tonight?"
  "The same thing we do every night, Pinky - try to take over the world."

So, my nefarious plan is this: let's all write a column (yep, I've definitely caught the bug from the idea proposed by that querent, WhatsIsName, recently.) What I'm talking about is creating a monthly thread in which we walk through a common Linux task - anybody is welcome to toss in the seminal idea and the first part of it (I'll take dibs on 'Installing a printer' this month :), and the rest of us can pitch in comments: different ways to do it, solutions to problems that could come up during the process, hardware/software quirks to watch out for, etc. At the end of each month, I'll format the whole thing into an article - I still need to figure out the layout, perhaps something highly structured with lots of footnotes - and the credit will go to all the members of TAG who participated.

What do you all think?

[Thomas] - I think it's a nice idea -- and something that's worth having a go at. ;)

[[Ben]] - A pleasure to see you positive and first out of the gate, Thomas. :) Very cool indeed.

[Martin] - That would be an interesting idea... Certainly would like to know more about modern-day printing...

All I have to do with the distro I use is select a printer, and it sets it up. I presume I'm using CUPS, although I'm not too sure...
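
[ A quick way to check, assuming the standard CUPS client tools are installed:

  # Ask the scheduler directly whether it is running
  lpstat -r
  # Or look for the daemon itself; the bracketed 'c' stops grep
  # from matching its own process entry
  ps ax | grep '[c]upsd'
]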

[[Ben]] - Well, that is the way it often works - but that's not something from which you can learn a whole lot. However, I'm more interested in the times when it doesn't work that way; I've run into a lot of those over time, and have learned quite a bit out of that. I'm definitely hoping that other people can contribute something from their less-than-perfect experiences as well.

[[[Martin]]] - Luckily every distro I've tried has drivers for my printers but I still would like to find out how it all works without...

[Suramya] - Sounds like a great idea.


Followup: [LG#126] mailbag #2

Jimmy O'Regan (jimregan at o2.ie)
Tue May 2 12:07:27 PDT 2006

Answered by:

Following up to a Mailbag item from issue #126:

GnomeMeeting is now called Ekiga: http://www.ekiga.org .


Followup: [LG#127] mailbag #4

Ganesh Viswanathan (gv1 at cise.ufl.edu)
Thu May 25 19:02:29 PDT 2006

Answered by: Ben, Neil, Thomas

[ This is a followup to the question "I have a question about rm command" from LG#127. -- Kat ]

My reply to the question:

"I have a question about rm command. Would you please tell me how to remove all the files excepts certain files like anything ended with .c?"

Hey,

The simple command (bash)

  rm *[!.c]

would work for deleting all files except the .c files, right?

For all directories also, one can use:

 rm -rf *[!.c]

Whatsay?

--Ganesh

[Thomas] - Assuming you had:

  shopt -s extglob

set.

  > For all directories also, one can use: rm -rf *[!.c]

I'd use find, since the glob will just create unnecessary headaches for you if the number of files is vast.

[[Ben]] - That would be my solution as well. In fact, since I'm a fan of "belt and suspenders"-type solutions for shell questions of this sort ('rm' is quite ungracious; it does exactly what you tell it to do - I'd have sworn I've heard it chuckle at me...), I'd use 'find' and 'xargs' both.
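
[ For the record, two safer spellings of "everything except *.c", sketched for the current directory only. Note that bash's extglob pattern !(*.c) expresses the exclusion directly, which the bracket expression *[!.c] does not - it only tests the final character:

  # bash, with extended globbing; rm will refuse the directories
  # it matches unless you add -r
  shopt -s extglob
  rm -- !(*.c)

  # find version - scales to very large directories
  find . -maxdepth 1 -type f ! -name '*.c' -print0 | xargs -0 -r rm --

The -print0/-0 pair keeps filenames containing spaces or newlines intact. ]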

[Neil] - [Quoting Ganesh]:

   > The simple command (bash)
   > rm *[!.c]
   > would work for deleting all files except the .c files, right?

That doesn't look right to me. I would expect that to delete all files except those ending in '.' or 'c', so it would not delete example.pc or example.c.

   > For all directories also, one can use:
   > rm -rf *[!.c]

That will probably delete far more than you want. The glob expression is evaluated by the shell in the current directory. The rm command does not see the expression "*[!.c]", it sees whatever list the shell has created from that expression, so if you had a directory containing readme.txt, and subdirectories src and module1, rm will get the command line "rm -rf readme.txt module1" and will delete everything in module1, including files ending in .c. The subdirectory src won't be matched, because, as I said above, it doesn't match anything ending in 'c'.

Here's a little demo:

  neil ~ 07:51:14 529 > mkdir /tmp/test
  neil ~ 07:51:26 530 > cd !$
  cd /tmp/test
  neil test 07:51:31 531 > mkdir src
  neil test 07:51:38 532 > mkdir module1
  neil test 07:51:45 533 > echo -n > readme.txt
  neil test 07:51:52 534 > echo -n > example.pc
  neil test 07:51:58 535 > cp readme.txt example.pc src/
  neil test 07:52:06 536 > cp readme.txt example.pc module1/
  neil test 07:52:09 537 > ls -lR
  .:
  total 8
  -rw-r--r-- 1 neil users 0 May 26 07:51 example.pc
  drwxr-xr-x 2 neil users 4096 May 26 07:52 module1
  -rw-r--r-- 1 neil users 0 May 26 07:51 readme.txt
  drwxr-xr-x 2 neil users 4096 May 26 07:52 src

  ./module1:
  total 0
  -rw-r--r-- 1 neil users 0 May 26 07:52 example.pc
  -rw-r--r-- 1 neil users 0 May 26 07:52 readme.txt
  
  ./src:
  total 0
  -rw-r--r-- 1 neil users 0 May 26 07:52 example.pc
  -rw-r--r-- 1 neil users 0 May 26 07:52 readme.txt
  neil test 07:52:16 538 > rm -rf *[!.c]
  neil test 07:52:27 539 > ls -lR
  .:
  total 4
  -rw-r--r-- 1 neil users 0 May 26 07:51 example.pc
  drwxr-xr-x 2 neil users 4096 May 26 07:52 src
  
  ./src: 
  total 0
  -rw-r--r-- 1 neil users 0 May 26 07:52 example.pc
  -rw-r--r-- 1 neil users 0 May 26 07:52 readme.txt
  neil test 07:52:30 540 > 

Multiple append= Directives in /etc/lilo.conf

moped (moped at sonbeam.org)
Thu Apr 6 10:13:41 PDT 2006

Answered by: Ben

I found a web page with your subject comment, but I couldn't find where you actually explained how to do it. Can you put a per-image append kernel option in lilo.conf? Or does something like that have to be done all on one line, as you suggest?

Could you please give me more info on how to do this?

Thanks!

[Ben] - Sure; here's a copy of the relevant part from my own "lilo.conf":

# Set the default image to boot
# 
default=Linux-new

image=/boot/vmlinuz
        vga=0x317
        label=Linux-new
        append="quiet acpi=on idebus=66 hdc=ide-scsi"
        read-only

image=/boot/vmlinuz.old
        vga=0x317
        label=Linux-old
        append="quiet acpi=on idebus=66"
        read-only

image=/boot/vmlinuz-2.6.8-rc3-bk4
        label=Linux-2.6.8
        vga=0x317
        append="resume=/dev/hda2 quiet acpi=on idebus=66 hdc=ide-scsi"
        read-only

Each image entry gets its own 'append' option. This is also documented in the LILO documentation; under Debian, the '/usr/share/doc/lilo/Manual.txt.gz' file contains several examples as well as an explanation of the option.
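
[ One step worth adding, since it bites everyone at least once: lilo.conf is only read when the map installer runs, so every edit must be followed by rerunning lilo before the next reboot:

  # Rewrite the boot map from the edited /etc/lilo.conf; -v shows the details
  /sbin/lilo -v
]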


Kmail mystery

Neil Youngman (ny at youngman.org.uk)
Fri Apr 28 01:00:48 PDT 2006

Answered by: Neil

[ Neil found his problem immediately, but I thought that the TAG readership would like to see his nicely clueful help request, anyway. -- Kat ]

I noticed this morning that kmail was sending messages via postfix on my machine, not via my ISP's mail server, and as a consequence some emails weren't being delivered, because mine is a dynamically allocated IP. This was unexpected, because:

a. I thought kmail was set up to deliver via my ISP

b. postfix was set up for local delivery

First off I reconfigured postfix (dpkg-reconfigure postfix) to make sure that it was configured for local deliveries only. This made no difference that I could see.

I then checked my kmail configuration. Under the accounts/sending tab, I saw that none of my sending accounts were marked as the default. That was unexpected, but seemed like a reasonable explanation for the problem. I selected the sending account for myisp and retested. It still sent via my local postfix program.

I took a closer look and that account had no hostname set. I'm starting to wonder how my kmail settings got into this state. I fix it and retest. It still goes via my local postfix! I stop and restart kmail, check the settings and retest. No change.

I am now completely at a loss. I search through all the visible kmail settings and I can't see any other settings that should affect how my email is sent.

My default sending account has the hostname for sending email set to mail.myisp.com. I can telnet to the SMTP port at mail.myisp.com, but kmail insists on sending mail to localhost. Why? What am I missing?

[Neil] - I am mistaken. The last 2 emails did go via my ISP, so somewhere along the line kmail picked up the change and I misread the logs.
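
[ For future reference, the mail log settles this kind of question quickly. On a Debian-style postfix setup, each delivery logs a "relay=" line naming the host that actually accepted the message:

  # Watch deliveries as they happen
  tail -f /var/log/mail.log
]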


How to recursively search HTML?

m at de-minimis.co.uk
Wed May 17 20:13:40 PDT 2006

Answered by: Ben, Francis, Thomas

Dear LinuxGazette,

Can you recommend a tool for recursively searching for a given word, starting at a given HTML page?

Google's great when on the web, but when I have HTML on my local machine Google can't get to it.

It should be reasonably easy to script something together using wget, awk (to get rid of anything that's not meant to be seen), and grep; however, if there are already nice solutions out there, one might as well use those.

Best Wishes, Max

[Thomas] - Do you just mean the content? Open it up in a browser. If it's the actual HTML, use an editor.

[Francis] - If you want to have content served by your web server searchable on an ongoing basis, then the best answer is probably to run a local search engine, such as htdig or swish-e (or, presumably, many others).

If you want content on my web server to be searchable, I'd prefer if you didn't fetch everything without asking me first.

"web server" there is for three things: get at the content initially for indexing; provide an interface to the search utility for searching; and allow the search utility present a link to the original content for retrieving. None of the three requires a web server, but for search engines which expect to be used on a web site you may need to make non-default configurations if you want your primary interface to be file: urls instead of http: ones.

  > It should be reasonably easy to script something together using wget, awk 
  > (to get rid of anything that's not meant to be seen), and grep; however, 
  > if there are already nice solutions out there, one might as well use those.

If it is to be a one-off or rare search, then "find | xargs grep" on the filesystem should work. The equivalent through the http interface would also work. ("find" becomes "print a list of urls"; that then would go through a url-retriever like wget or curl to spit the content to stdout for grep; if grep matched, you'd want to print the matching url for later consideration.)

In either case, you'll effectively be searching all of the content every time.

You can get your search engine to search all of the content once, and then tell you where to look for specific terms. The larger the volume of content, the sooner it will have been worth indexing it initially.

[Ben] - I suppose it depends on what you mean by "recursively" - for many webpages, enough recursion depth means searching the entire web.

I don't know of anything like that already made up, but as you say, it should be easy enough to script. My first cut would be something like 'wget' piped into 'lynx -dump -nolist' plus 'grep' to filter the output - but YMMV.
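
[ A sketch of that pipeline for pages already on disk, assuming lynx is installed; ~/html and "word" are placeholders. Searching the rendered text means markup, comments, and scripts are ignored:

  # Print the names of local HTML files whose visible text contains "word"
  find ~/html -name '*.html' -print0 |
  while IFS= read -r -d '' f; do
      lynx -dump -nolist "$f" | grep -qi 'word' && printf '%s\n' "$f"
  done
]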


Cannot read Kingston USB pen drive with DOS and Mac partitions

Sindi Keesan (keesan at grex.cyberspace.org)
Sun May 14 10:15:13 PDT 2006

Answered by: Thomas

Someone asked for our help repartitioning/reformatting (to FAT16) a Kingston DataTraveler Elite 256MB USB 2.0 pen drive, which currently contains one DOS partition of about 200MB that can be accessed via a Mac (which was used to delete the files in it), and a 50MB or so Mac partition that cannot be reformatted by the Mac. (Something is greyed out.)

The drive was apparently given out by a TCF bank at a conference and one of the partitions is labelled TCF_2005.

usb-storage.o identifies it as Model: TCF_2005 and Type: CD-ROM

I was told that DOS USB drivers also found it as CD-ROM, and Windows XP or 2000 found DOS and Mac partitions. Our DOS drivers did not find it at all on our older computers. We have only USB 1.0 ports.

I attempted to use the Slackware 10.2 bare.i kernel with hfs.o (Mac file system) module, which has no dependencies listed in modules.dep.

I use Slackware-4.0- and uClibc-based Basiclinux 3.40 and Slackware-7.1-based Basiclinux 2.1 with glibc upgraded to 2.2.5.

When I insmod hfs.o:

Unresolved symbol: generic_file_llseek
  generic_commit_write 
  unlock_new_inode 
  generic_read_dir 
  block_read_full_page 
  __out_of_line_bug 
  block_sync_page 
  cont_prepare_write 
  event 
  mark_buffer_dirty 
  block_write_full_page 
  iget4_locked 
  generic_block_bmap

(copied via pencil and paper - how do I save such messages to a file?).

hfsplus.o gave twice as many lines of messages.

I found only four references to the first three lines in google/linux and they were for later kernels.

The drive comes with Windows or OSX security software which I suspect is causing the problem.

I cannot use fdisk without first finding the drive as a device.

Is it possible to repartition this drive with any linux?

Sindi Keesan

[Thomas] - Why insmod? That's not really the best way to go. You almost certainly want to use modprobe, so that any dependent modules needed by the one you're trying to load are loaded as well.

  > (copied via pencil and paper - how do I save such messages to a file?).

They'll appear in /var/log/messages.

  > hfsplus.o gave twice as many lines of messages.

I presume with the same content as the above?

  > I found only four references to the first three lines in google/linux 
  > and they were for later kernels. 
  > 
  > The drive comes with Windows or OSX security software which I suspect 
  > is causing the problem. 
  > 
  > I cannot use fdisk without first finding the drive as a device.

Such things will appear as mass storage devices. Usually via hotplug, they'll be assigned one of the /dev/sda* mappings. In fact, hotplug is something that is both useful and a PITA. For kicks, try resetting it, removing your pen drive first, and then reinserting it:

  sudo /etc/init.d/hotplug restart 

(if you don't know what sudo is, or don't have it set up, you'll have to su to root to do the above.)

You should look at the output from 'dmesg' to see what else it says about your device. Something you haven't even bothered to tell us (but that's OK -- everything is about guessing around here) is the kernel version you're using. Guess what mass storage devices use? Go on -- you might even get a gold star out of it. No? Ok, it's SCSI. In the 2.4.X kernels, you'll need some sort of emulation layer to do this -- scsi_mod probably.

If you're in 2.6.X, you don't have to worry about that.

> Is it possible to repartition this drive with any linux?

It is -- but this is 200MB we're talking about here...
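
[ Once the usb-storage and SCSI pieces are in place, the usual checks look like this; /dev/sda stands in for whatever name the kernel actually reports:

  # What did the kernel make of the drive when it was plugged in?
  dmesg | tail -20
  # If it registered as a SCSI disk, the partition table should be visible:
  fdisk -l /dev/sda

Once fdisk can see it, repartitioning and mkdosfs are straightforward. ]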


PHP and Apache

clarjon1 (clarjon1 at gmail.com)
Fri May 12 12:29:22 PDT 2006

Answered by: Ben, Thomas

Hey gang, hope you can help

I am running Slackware 10.2 with the default apache and php stuff that came with the installation. I am having a few troubles with PHP. I have done what I have read I am supposed to do, and the modules are loaded, but when I go to view a PHP doc via the server, I get either a blank page or a "Download To" box (depending on the browser, and what page it is). Any suggestions? I would really appreciate it.

Thanks!

(PS, I can't wait to finish writing my first article to submit to LG! :) )

[Thomas] - [Quoting clarjon1]

 > I have done what I have read I am supposed to do, and the modules are

Really? And just what have you read?

You can't have done everything correctly. This symptom is usually the result of Apache not knowing how to deal with that file type. I am going to make an assumption here: that you really have got the PHP modules loaded. Of course, since you don't tell us which version of Apache you're using, it makes things slightly harder. I don't use Apache2 (way overhyped), so for Apache 1.3.X, you have to ensure that in your /etc/apache/httpd.conf file you have something like:

 AddType application/x-httpd-php .php 
 AddType application/x-httpd-php-source .phps

[[Clarjon1]] - Sorry for not specifying what all the versions are, I was at school at the time, and was pretty rushed for time.

[[[Thomas]]] - It happens.

[[Clarjon1]] - I have Apache 1.3.33, and I have those modules loaded in both mod_php and httpd.conf, so I actually get errors about the modules already being loaded.

[[[Thomas]]] - This is normal, and quite harmless. Which version of PHP are you using?

[[Clarjon1]] - The AddType lines, I have those added into the httpd.conf file, and the application/x-httpd-php stuff in the mimetypes file as well (without the AddType there, of course)

[[[Thomas]]] - Right.

[[Clarjon1]] - The module is there, I think that it could be that I messed something up somewhere in installation, and it's biting me now. I'm gonna try downloading fresh copies of apache and php later today, and compile from the source when I get home.

[[[Thomas]]] - Possibly overkill. It's probably not the applications per se that are causing your issue, but a misconfiguration. I would not have said that recompiling anything is going to help, but YMMV.

[[Clarjon1]] - One last question, what sort of errors should I look at in the error/access logs for hints as to what is going wrong? Thanks for taking the time to help, guys. I really appreciate this.

[[[Thomas]]] - There probably won't be any errors, since Apache is doing exactly what you asked it to do (i.e. with a misconfiguration, or no configuration, it will attempt to either display the php file, or give you the option of downloading it). I actually have a multitail window open (via sshfs) that tails both /var/log/apache/{access,errors}.log:

  multitail -I /mnt/var/log/apache/errors.log -I \
  /var/log/apache/access.log

It's quite useful for things like this. :P

[[[[Clarjon1]]]] - Thanks for all your help, but I was able to fix the problem. Turns out I had all the configuration files set up correctly; I was just missing some files. I don't know why nothing complained, but oh well. I went through the packages from my other distros I have kicking around, and I found this package: libphp_common432

This package provides the common files to run with different implementations of PHP. You need this package if you install the php standalone package or a webserver with php support (ie: mod_php).

I installed it, and voila! I now have PHP working. I got a little forum set up (after finding MySQL was broken; I then had to install Postgres, and then figure out how to use Postgres and the database and users which can be used via the web). I'm glad that's over with.

So all is good for now with this problem. Thanks again for your help!

[Ben] - As Thomas said, it sounds like you don't have the PHP filetypes defined in your 'httpd.conf' - or perhaps the relevant module is missing or not enabled. On Debian, this happens automatically as part of the PHP package installation procedure; presumably, Slackware doesn't do that, so you'll have to do it manually. In my /etc/apache2/apache2.conf, the relevant entries look like this:

  DirectoryIndex index.html index.cgi index.pl index.xhtml index.php
  Include /etc/apache2/mods-enabled/*.load

I'd also ensure that there was a 'php5.conf' and a 'php5.load' in my /etc/apache2/mods-enabled/ directory. For plain Apache (i.e., not
Apache2), the above would be more of a direct reference from 'httpd.conf' and would look something like

  AddType application/x-httpd-php .php
  AddType application/x-httpd-php-source .phps
  
  LoadModule php5_module /usr/lib/apache/1.3/libphp5.so  

(This would, of course, require that the module named above existed at that location.)

Something I've also found helpful in my PHP travails of the past (this wouldn't help in your case, since the problem you're having comes before this level of troubleshooting) is having a file called 'info.php' living somewhere in my web hierarchy and consisting of the following:

  <?php
  phpinfo();
  ?>

Once PHP is up and running, looking at this file will give you lots and lots of info about your PHP installation and its capabilities.


Printer setup

Neil Youngman (ny at youngman.org.uk)
Sat Apr 1 00:01:28 PST 2006

Answered by: Ben, Martin, Neil

I upgraded to the latest SimplyMepis a while back and since then I've been unable to get it to configure my printer. IIRC, when I originally set up Mepis I just installed some extra packages to give it the right drivers and configured it in the CUPS web interface at localhost:631.

The printer is a Xerox XK35C, for which the lex5700 driver is recommended. Searching the debian packages for xk35c, I find foomatic-db and foomatic-filters-ppds seem to have relevant files. Having made sure both are installed, I still don't get the Xerox Workcentre XK35C or the Lexmark 5700 on the list of available printers.

The HOWTOs don't offer anything more than "install the correct driver package and configure the printer". Does anyone know what I'm missing here?

[Martin] - Have you re-started CUPS?

 /etc/init.d/cups restart 

Sometimes you have to restart CUPS for it to see any new drivers; then just select the printer in the web interface...

[[Neil]] - This machine is not usually left on overnight, so it's been rebooted quite a few times since the drivers were installed.

[Ben] - Check to make sure that your system (via hotplug) still detects your printer. Examine the output of 'dmesg' for something like this:

usb 2-1: ep0 maxpacket = 8
usb 2-1: default language 0x0409
usb 2-1: new device strings: Mfr=1, Product=2, SerialNumber=3
usb 2-1: Product: psc 1310 series 
usb 2-1: Manufacturer: hp
usb 2-1: SerialNumber: CN522C5230V4
usb 2-1: uevent
usb 2-1: device is self-powered
usb 2-1: configuration #1 chosen from 1 choice
usb 2-1: adding 2-1:1.0 (config #1, interface 0)
usb 2-1:1.0: uevent
usb 2-1: adding 2-1:1.1 (config #1, interface 1)
usb 2-1:1.1: uevent
usb 2-1: adding 2-1:1.2 (config #1, interface 2)
usb 2-1:1.2: uevent
drivers/usb/core/inode.c: creating file '002'
hub 2-0:1.0: state 7 ports 2 chg 0000 evt 0002
uhci_hcd 0000:00:1d.0: suspend_rh (auto-stop)
hub 3-0:1.0: state 7 ports 2 chg 0000 evt 0000
drivers/usb/core/inode.c: creating file '001'
usblp 2-1:1.1: usb_probe_interface
usblp 2-1:1.1: usb_probe_interface - got id
drivers/usb/core/file.c: looking for a minor, starting at 0
drivers/usb/class/usblp.c: usblp0: USB Bidirectional printer dev 2 if 1 alt 0 proto 2 vid 0x03F0 pid 0x3F11
usbcore: registered new driver usblp
drivers/usb/class/usblp.c: v0.13: USB Printer Device Class driver

Also (assuming that it's a USB printer and you have 'usbfs' configured and mounted), take a look at '/proc/bus/usb/devices'; my printer entry there looks like this:

T:  Bus=02 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#=  3 Spd=12  MxCh= 0
D:  Ver= 2.00 Cls=00(>ifc ) Sub=00 Prot=00 MxPS= 8 #Cfgs=  1
P:  Vendor=03f0 ProdID=3f11 Rev= 1.00
S:  Manufacturer=hp
S:  Product=psc 1310 series
S:  SerialNumber=CN522C5230V4
C:* #Ifs= 3 Cfg#= 1 Atr=c0 MxPwr=  2mA
I:  If#= 0 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=cc Prot=00 Driver=(none)
E:  Ad=01(O) Atr=02(Bulk) MxPS=  64 Ivl=0ms
E:  Ad=81(I) Atr=02(Bulk) MxPS=  32 Ivl=0ms
E:  Ad=82(I) Atr=03(Int.) MxPS=   8 Ivl=10ms
I:  If#= 1 Alt= 0 #EPs= 3 Cls=07(print) Sub=01 Prot=02 Driver=usblp
E:  Ad=03(O) Atr=02(Bulk) MxPS=  32 Ivl=0ms
E:  Ad=83(I) Atr=02(Bulk) MxPS=  32 Ivl=0ms
E:  Ad=84(I) Atr=03(Int.) MxPS=   8 Ivl=10ms
I:  If#= 2 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=ff Driver=(none)
E:  Ad=07(O) Atr=02(Bulk) MxPS=  32 Ivl=0ms
E:  Ad=87(I) Atr=02(Bulk) MxPS=  32 Ivl=0ms
E:  Ad=88(I) Atr=03(Int.) MxPS=   8 Ivl=10ms
I:  If#= 2 Alt= 1 #EPs= 3 Cls=ff(vend.) Sub=d4 Prot=00 Driver=(none)
E:  Ad=07(O) Atr=02(Bulk) MxPS=  32 Ivl=0ms
E:  Ad=87(I) Atr=02(Bulk) MxPS=  32 Ivl=0ms
E:  Ad=88(I) Atr=03(Int.) MxPS=   8 Ivl=10ms

If all the hardware stuff is OK, try printing directly to the device:

cat /etc/group > /dev/usb/lp0

If that works, then the problem is (obviously) CUPS. I've wrestled with that thing myself, in the past, and here's what worked for me:

Check /etc/cups/ppd to make sure that you have the correct PPD file. If you do, and 'cups' still fails to detect your printer, go to http://linuxprinting.org and grab a fresh copy of that PPD file - I've had the distro install a stale version that couldn't handle my (brand new) printer twice now, with two different printers.

[[Neil]] - No, it's not USB.

[Ben] - There's also a 'HOWTO'-type page at the above URL that walks you through the intricacies of dealing with CUPS and FooMatic stuff that you might find helpful.

[[Neil]] - I've looked at that before, but this time I found enough inspiration to solve it. The PPD file hadn't been installed where CUPS could see it.

[[[Ben]]] - Now that you remind me - yeah, shortly after I installed CUPS a couple of years ago, I had the same problem. The PPD I needed was located somewhere deep inside '/usr/share' or '/usr/lib', but CUPS just ignored it. I think it was that FAQ at linuxprinting.org that finally put me onto the fact that it should be in '/etc/cups/ppd' instead.

[[Neil]] - I located the PPD file with

find / -xdev -iname \*xk35\*

and then I copied it to /usr/share/cups/model/

root@2[~]# 
gunzip /usr/share/ppd/linuxprinting.org-gs-builtin/Xerox/Xerox-WorkCentre_XK35c-lex5700.ppd.gz
root@2[~]# 
cp /usr/share/ppd/linuxprinting.org-gs-builtin/Xerox/Xerox-WorkCentre_XK35c-lex5700.ppd /usr/share/cups/model/
root@2[~]# /etc/init.d/cupsys restart
Restarting Common Unix Printing System: cupsd.
root@2[~]# 

After that I was able to configure it from the web interface.

Easy once I looked in the right place. Thanks for getting me on the right track.

[[[Ben]]] - Glad I could help, Neil!


Reading Oxygen Phone Manager files

Jimmy O'Regan (jimregan at o2.ie)
Thu May 25 08:17:43 PDT 2006

Answered by: Ben, Francis, Jimmy

Every now and then I use a Windows program called "Oxygen Phone Manager" to back up my phone. It's able to read information that none of the Linux programs can, so it solves the problem of how to back up everything on the phone, but uses some strange file formats of its own, which is another problem...

This script converts data from the call registry to the XML format that OPM outputs (.cll files, or [mnt point]/Program Files/Oxygen/OPM2/Data/Phones/[phone IMEI]/CallRegister.dat).

This is probably the least useful information that OPM extracts, but I needed to use it to understand the date format OPM uses, and someone somewhere might find it useful.

[Jimmy] - It turns out that at least one other program uses the same date format, so the date stuff is probably more useful than I'd thought.

Anyway, here's a Python version of the date stuff (I use python as a hex editor :)

[Francis] - Hi there,

I think there are some imperfections in the homebrew date code (which is why there are modules for that sort of thing, of course).

As a verification, 36524 days after 1 Jan 1900 should be 1 Jan 2000 (100 years x 365 days per year plus 24 leap days between those dates).

My date(1) doesn't go back as far as 1 Jan 1900, but 1000 days later is 28 Sep 1902, so I can use that as an independent verification.

   $ date -d "28 Sep 1902 + 35524 days" +%d/%m/%Y
   01/01/2000

   print days_since_1900(36524);
   -2/01/2000

So: homebrew perl is a few days out, and it prints negative numbers.

   > sub split_date
   > {
   [snip]
   > # Number of days since 1900
   > if (eval "require Date::Manip")
   > {
   > import Date::Manip qw(DateCalc Date_Init);
   > 
   > # OPM doesn't seem to use proper leap years
   > $numdays -= 2;
   [snip]
   > }
   [snip]
   > else
   > {
   > # My crappy function, as a last resort
   > days_since_1900 ($numdays);
   > }
   [snip]
   > }
   > 
   > if ($buggy_as_hell)
   > {
   > # 38860.8914930556 should be 23/05/2006 21:23:45
   > split_date(38860.8914930556);

That'll give 23/05/2006 because you subtract 2 within split_date when using Date::Manip, as shown above.

38860 days after 1 Jan 1900 is actually the 25th. It's only worth mentioning because you don't explicitly subtract 2 within days_since_1900.

   > sub days_since_1900
   > {
   > my $numdays = shift;
   > my @mdays = qw(31 28 31 30 31 30 31 31 30 31 30 31);
   > 
   > my $years = int($numdays / 365);

Minor point -- after 366 leap years have passed, that'll be off.

Unlikely to be a major problem.

   > if ($buggy_as_hell) {print "$years years\n";}
   > 
   > $numdays -= ($years * 365);

That's $numdays %= 365, "the number of days gone this year plus the number of leap days since epoch".

   > if (($years % 4) == 0)
   > {
   > $mdays[1] = 29;
   > }

Other minor point -- that's strictly wrong; but for the range of "90 years either side of today" it works, so it's also unlikely to be a major problem.

   > if ($buggy_as_hell) {print "February has $mdays[1] days\n";}
   > 
   > my $leapyears = int ($years / 4);
   > if ($buggy_as_hell) {print "$leapyears leapyears\n";}
   > $leapyears++; # Um... 'cos this doesn't count 1900 as OPM does

Don't know exactly what OPM does, but $leapyears is now "the number of leap days since epoch, plus one, plus one more if this is a leap year but it's not yet Feb 29th" which is probably not a very useful count.

   > $numdays -= $leapyears; 

The big problem is here. $numdays can now be negative, which will lead to odd dates come December -- try 39050 and 39060 and spot the awkwardness.

So: if $numdays is less than 1, decrease $years by 1, increase $numdays by 365 (or 366 if the new $years corresponds to a leap year), and then think about how to handle the "if this is a leap year but it's not yet Feb 29th" thing.

Oh, and maybe subtract the 2 that happens in the Date::Manip case.

It's too much to wrap my head around right now, so I'll just suggest the cheaty

   $numdays -= 1000;
   $date=qx{date -d "28 Sep 1902 + $numdays days" +%d/%m/%Y};

as a shell-out workaround.

And the magic OPM "2"? Maybe it counts 1 Jan 1900 as day 1 not day 0, and maybe it thinks 1900 was a leap year? That could bump the current numbers up by 2 compared to what is expected, perhaps.

The same algorithmic error is in the python implementation.
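
[ For what it's worth, the numbers fit the Delphi/OLE date convention exactly: day 0 is 30 Dec 1899, which makes 1 Jan 1900 day 2 (the magic "2") and puts serial 38860 on 23 May 2006. A shell-out sketch using that epoch, assuming a GNU date whose range reaches back that far:

  # Whole days from 1899-12-30, fractional part as seconds within the day
  serial=38860.8914930556
  secs=$(awk -v s="$serial" 'BEGIN { printf "%.0f", s * 86400 }')
  date -u -d "1899-12-30 00:00:00 + $secs seconds" +'%d/%m/%Y %H:%M:%S'
  # prints 23/05/2006 21:23:45
]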

[Ben] - I didn't have the time to dig into it, but two errors jumped out at me:

   import Date::Manip qw(DateCalc Date_Init);

Ummm... don't do that. Just don't. The reasons are long and complex (see 'perldoc -f import', if you feel like unraveling a tangled string), but that should be 'use Date::Manip qw(DateCalc Date_Init);' instead.

You've also got 'xml_line' being invoked as

   while (<>)
   {
   s/\r\n//;
   my ($type, $order, $number, $foo, $name, @times) = split (/\t/, $_);
   xml_line ($type, $number, $name, @times);
   }  

but declared as

   my ($type, $number, $name, $stimes) = @_;
   my @times = split / /,$stimes;

Given your sample data, you're putting a single time string into '@times', then reading it as a scalar. The easiest fix (well, 'problem prevention' rather than 'fix' - it'll work as it is unless a "\t" gets into the time data) would be to invoke and process it as a scalar:

   my ($type, $order, $number, $foo, $name, $times) = split (/\t/, $_);
   xml_line ($type, $number, $name, $times); 

or, perhaps, by completely eliding those temp vars since they're only used once anyway - which would eliminate the problem as a side effect:

   while (<>)
   {
   s/\r\n//;
   xml_line ( ( split /\t/)[0,2,4,5] );
   }

Network Traffic Review/Filtering

sloopy (sloopy_m at comcast.net)
Thu Apr 6 04:27:56 PDT 2006

Answered by: Francis, Kapil, Suramya

Greets and Salutations,

I have lurked about on the ML for quite a while and wanted to thank all that make this source of info (and at times humor) for their time and knowledge.

The question(s)

I run an 8-10 node network at home through a router (a VIA C3 mobo with Fedora Core) and would like to have a way of setting up a web page on it that would list URLs being retrieved from the inet, with a nice side option of being able to block certain content for some nodes on the network. Would I need to run a proxy (i.e. squid or similar) for this? Or would this be over the capabilities of the router machine?

thanks,

sloopy.

[Francis] - Hi there,

  > I have lurked about on the ML for quite a while

Great -- I'll not explicitly Cc: you since you're on the list already.

Short answer: you do not need to run a proxy, but you should do so. And it should not exceed the capabilities, unless you've other funny stuff going on.

Longer answer follows...

Usually, the hard part about designing a solution is precisely specifying the intention ;-)

that would list URL's being retrieved from the inet

will need to have access to those URLs.

I'm guessing that you primarily care about http, with perhaps an interest in https or ftp too.

Expect that you won't see details of https requests. It's easier that way.

One way to get access to those URLs would be to run a network sniffer on the router and configure it to record HTTP requests.

Completely passive, no change to what the users see or do, and you can prevent the system becoming swamped by allowing it to drop packets if it thinks it is too busy, at the cost of not having a complete record of the accesses. Google will be able to suggest software which can do this, if you don't have any favourites. One variant I've come across (but can't think of the name right now) involves sniffing and displaying any images in http traffic that are on the network. That may or may not be appropriate for your setup, of course.

[[Suramya]] - The software you are thinking about is called Driftnet .

Be really, really careful while running this. We tried this as a test at my last job, and were showing off the program to a group of co-workers; I guess someone in the building was surfing porn at that time, so we got to display some really explicit content on a really big display.

Consider yourself warned if you want to use this... :)

[Francis] - Another way to get the URL list would be to require the use of a http proxy server, and read the log file it generates. One way to require the use of a proxy server is to allow (web) access from it and block access from elsewhere. Another way is to try network magic on the router and use a transparent proxy. (With the obvious caveat that a transparent proxy can't work in all cases; that may not be an issue for you.)

Small or no changes to what the users see and do. Active, in that if the extra service fails, web access is stopped. And it does allow for the possibility of caching and access control, if you go that way.

In either case, for a web page, "tail the-url-list-file" as an appropriate user might be sufficient, or just show the raw log file, or use any of the log analysers to generate pretty pictures. Again, Google will likely show you a ready-made solution for whichever you pick.

  > being able to block certain content for some nodes on the network.

When you have defined (for yourself) "certain content" and "some nodes", you'll probably find that it's easier to do this at the http level, with a proxy server. It can be done at the tcp level with a more active network sniffer, but the simple answer is "don't do that".

On my machine (PII 366MHz, 128MB) I use squid as a caching proxy server, with squidGuard as a redirector intended primarily to block some images and undesired content. Single client node, and (deliberately) easily bypassed. (I also have a thttpd to serve the redirected content, for completeness.)

In my setup, the control is handled mostly by squidGuard, which can decide based on source address or url, but not based on content. That's good enough for me. (There's a Fine Manual with all of the details.)

Occasionally I want to make local modifications to the block-or-allow lists, which I do from a shell. A webmin module for it exists if you like that kind of thing, but I've not examined it in a long time.

"My users" is "me", and they all know how to fix or work around broken bits without bothering their admin. This happy state may not match your network.

Extra memory is always good. And disk space if you choose to cache or retain logs. But if the machine can run Fedora Core, it can run squid/squidGuard for a small network.

Set up squid and squidGuard and whatever other bits you want, and use it as your proxy server. Configure squid to allow what you want to allow, and block the rest. Configure squidGuard to allow or block as appropriate. And make sure that "what you want to allow" won't surprise the other users.

Admire your URL list web page, and change it to match what you want.

Decide how important to you it is to block access and have a full log of http requests.

Then either invite others to use the proxy, require others explicitly to use it, or require others transparently to use it (the latter two include adjusting your ip filtering rules).

And when it's all working, if you've something to add over what's in the lg archives, write it up as an article and submit.

Good luck with it!

[[[Kapil]]] - As Suramya pointed out anecdotally, in any (re-)configuration of routers/firewalls make sure you understand and can handle the "politics".

As Francis Daly said you have three solutions. I'll add a glimpse to the politics associated with each.

a. Force all nodes to use a web proxy by blocking other nodes from accessing the web directly (using firewall rules). Any web proxy combined with a log analyzer (analog?) can do what you want.

Provide a ".pac" file (for automatic proxy configuration) for user convenience.

This way everyone using the nodes knows what you are doing and how.

b. Automatically redirect web connections from the nodes to the web proxy by firewall rules. You need a web proxy (like squid) that can handle "transparent proxying".

The users need not be told anything but they'll probably find out!

"Transparent" proxying is generally not quite transparent and in my experience does break a few (very few) sites. Note that web proxies are acounted for by the RFC for HTTP but transparent proxies are not.

c. Use firewall rules to send a copy of all web traffic through a sniffer which can extract the URL's. You can insert firewall rules to block/allow specific IP addresses.

Again the users need not be told anything.

You will not be breaking any network protocols by doing this.

Hope this helps,


Meaning of overruns & frame in ifconfig output

Ramon van Alteren (ramon at vanalteren.nl)
Thu Apr 6 13:41:50 PDT 2006

Answered by: Ben, Francis, Martin, Ramon

Hi All,

Out of pure interest: would anyone know the exact definition of (or provide me with pointers to said definition for) the fields "overruns" and "frame" in the output of ifconfig?

I've been searching google for an answer, but that mostly turns up false positives from all the people over the years that have posted their ifconfig output on the internet.

I've checked wikipedia which has an external link to the following short bit on overruns but nothing on the frame field: "Receiver overruns usually occur when packets come in faster than the kernel can service the last interrupt. "

I'm seeing these (overrun & frame errors) on a NIC in a load-balancer which services just the incoming http-requests (outgoing uses direct routing), and I'm buying new ones tomorrow. I am, however, still curious what these values actually mean.

Thanx,

Ramon

[Francis] - The source code might :-)

[[Ramon]] - Kept that as a last resort, my C coding & reading skills is at best rusty and at worst non-existant.

[Francis] - But there's a 450 kB 45-page pdf at both

http://www.utica.edu/academic/institutes/ecii/publications/articles/A0472DF7-ADC9-7FDE-C80B5E5B306A85C4.pdf
and http://www.computer-tutorials.org/ebooks/02_summer_art1.pdf

with the heading

International Journal of Digital Evidence
Summer 2002, Volume 1, Issue 2

"Error, Uncertainty, and Loss in Digital Evidence"
Eoghan Casey, MA

which includes on p29

One manufacturer provides the following information about interface errors,
including datagrams lost due to receiver overruns a.k.a.  FIFO overruns (NX Networks,
1997).
    *  packet too long or failed, frame too long: "The interface received
    a packet that is larger than the maximum size of 1518 bytes for an
    Ethernet frame."
    *  CRC error or failed, FCS (Frame Check Sequence) error: "The
    interface received a packet with a CRC error."
    *  Framing error or failed, alignment error: "The interface received
    a packet whose length in bits is not a multiple of eight."
    *  FIFO Overrun: "The Ethernet chipset is unable to store bytes in
    the local packet buffer as fast as they come off the wire."
    *  Collision in packet: "Increments when a packet collides as the
    interface attempts to receive a packet, but the local packet buffer
    is full.  This error indicates that the network has more traffic than
    the interface can handle."
    *  Buffer full warnings: "Increments each time the local packet
    buffer is full."
    *  Packet misses: "The interface attempted to receive a packet,
    but the local packet buffer is full.  This error indicates that the
    network has more traffic than the interface can handle."
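
[ For watching these counters move in real time: the numbers ifconfig summarizes come from /proc/net/dev, and many drivers expose finer-grained counts through ethtool, where supported:

  # Kernel-wide per-interface counters, refreshed every second
  watch -n1 cat /proc/net/dev
  # Driver-specific statistics, if the NIC's driver provides them
  ethtool -S eth0
]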

[[Ramon]] - Mmm interesting link, more reading material for the stack ;-) Thanks

[Francis] - This is friend-of-a-friend third hand stuff by now, of course, but it certainly sounds reasonable to me, and might give you pointers to some more terms to search for to find something demonstrably authoritative.

For what it's worth,

/proc/net/dev packet framing error overrun

was what I (eventually) asked Google for, before asking for

"NX Networks" ifconfig

which found the two links.

I hope the above isn't useless.

[[Ramon]] - Certainly not; at the very least it will serve to enlighten my soul with knowledge, which is a good thing (tm)

FYI: It took me a while to pinpoint this bottleneck; most answers google turned up pointed to badly configured or malfunctioning hardware and/or PCI-busmaster stuff.

This is a well-configured Intel 100Mbit NIC with busmastering enabled, which is apparently flooded with http-traffic to the point that it cannot generate interrupts fast enough to get the frames out of its buffer. That's the first time I've ever seen that.

[[[Martin]]] - DOS attack was the first thing that came to mind...

[[[Ben]]] - I've seen NICs from nominally reputable vendors start throwing a large number of errors under a much lower load than what you've described - I tested several identical units and gave up in disgust. The Intel cards that I used to replace them worked great, never a complaint. The one you described is just an amazing sample of the breed; it should be given a retirement dinner, a gold watch, and a full pension.

[[[[Ramon]]]] - It's an Intel NIC

;-)

[[[[Martin]]]] - So what would cause a network card to throw that many errors then?

I'm guessing that it's either dodgy drivers or actual dodgy manufacturing...

[[[[[Ben]]]]] - The latter. This happened ~7 years ago, but since I had to send the cards to the States to get them replaced (I was in the USVIs at the time), and everything took place in S-L-O-W T-I-M-E, many of the details are still with me. It all took place under a certain legacy OS, but the drivers that the vendor specified (ne2000) were the bone-standard, came-with-the-OS types. By the time the third card arrived - I had a hard time believing that I got two bad cards, but seeing trainwrecks happening at 2MB/s was a pretty strong motivator to find out - I had installed Debian (dual-boot) on my machine, and could show the NIC falling over and twitching piteously under two different OSes. Then, I marched into the CEO's office with a couple of printouts and demanded a case of Intel NICs. I had no experience with them personally, but I had a number of trustworthy admin friends who swore by these things while swearing off everything else.

Oh yeah - I should mention that the CEO of that company was convinced that the IS department should be able to run on, oh, fifty cents a month - and asking for anything more was an outrage and a shuck that he was way too smart to fall for. This led to some interesting confrontations on a regular basis - I suppose he liked high drama. Anyway, I don't recall if I had to resort to bodily harm and talking about his mama, but I got those cards and spent the week after that installing them on every machine in the company (except the CEO's, he-heh. He had a brand-new Dell for his Minesweeper and Solitaire, and had previously told me that he didn't want me to touch it.)

After The Great Replacement, many of my network troubles disappeared - and as a bonus, so did lots of database problems. Seems that many of the latter were caused by apps establishing a lock for the duration of the transaction + the network flaking out during that transaction. This would cause the lock to persist until that machine was rebooted (!), and nobody else could get access to that DB until it was... this was a Novell 4.01 goodie that fortunately went away when I updated everything to 4.11, months later.

[[[Francis]]] - (quoting Ramon): Kept that as a last resort; my C coding & reading skills are at best rusty and at worst non-existent.

As backup for the "random file on the web" info, /usr/include/linux/netdevice.h on my 2.4-series machine includes in the definition of "struct net_device_stats", in the "detailed rx_errors:" section

       
  unsigned long   rx_frame_errors;        /* recv'd frame alignment error */
  unsigned long   rx_fifo_errors;         /* recv'r fifo overrun          */

which seems consistent with that information. (Similar entries appear in a few other headers there too.)

And egrep'ing some nic drivers for "rx_frame_errors|rx_fifo_errors" reveals notes like "Packet too long" for frame and "Alignment error" for fifo.

So if it's wrong, it appears consistently wrong.

As to the cause of the errors -- fifo overrun, once the nic is configured and negotiated right, might just mean "you have a (seriously) large amount of traffic on that interface". Presumably you've already considered what devices might be generating that traffic, and isolating things on their own network segment. An alternative is that the machine itself is too busy to process the interrupts and data, but you would likely have spotted that sooner.
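If you want to watch those counters move under load, here's a minimal sketch (mine, not from the kernel or any driver; the field positions are an assumption based on the 2.4-era /proc/net/dev layout, where the receive columns run bytes, packets, errs, drop, fifo, frame, compressed, multicast):

  /* rxwatch.c - poll /proc/net/dev and print the RX fifo (overrun) and
   * frame counters for one interface.  Purely a sketch; field positions
   * assume the 2.4-era layout described above. */
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  int main(int argc, char **argv)
  {
      const char *ifname = (argc > 1) ? argv[1] : "eth0";

      for (;;) {
          FILE *fp = fopen("/proc/net/dev", "r");
          char line[512];

          if (!fp) {
              perror("/proc/net/dev");
              return 1;
          }
          while (fgets(line, sizeof line, fp)) {
              char *colon = strchr(line, ':');
              unsigned long rx[8];

              if (!colon)
                  continue;                /* header lines have no colon */
              *colon = '\0';
              if (strcmp(ifname, line + strspn(line, " ")))
                  continue;                /* not the interface we want  */
              if (sscanf(colon + 1, "%lu %lu %lu %lu %lu %lu %lu %lu",
                         rx, rx + 1, rx + 2, rx + 3,
                         rx + 4, rx + 5, rx + 6, rx + 7) == 8)
                  printf("%s: fifo (overruns) = %lu   frame = %lu\n",
                         ifname, rx[4], rx[5]);
          }
          fclose(fp);
          sleep(5);   /* a fifo count that keeps climbing under load
                         suggests interrupts aren't serviced in time */
      }
  }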

Frame alignment, on the other hand (again, presuming that it isn't just an artifact of a nearly-full buffer) suggests that some other device is generating a dubious frame. Any consistent pattern in source MAC or IP which can be used to point the finger?

[[[[Ramon]]]] - Nope; the reason I couldn't believe the error at first sight is that it is NOT failure behaviour. It simply is legit traffic ;-)

It's the external NIC on the load-balancer for the website I work for. It's pulling 9M+ pageviews per day, with a 12-hour sustained peak of 500K+ pageviews per hour. All that traffic needs to go through this single 100Mbit interface, in large numbers of small HTTP request packets.

I'm swapping it out for a Gigabit card later today.

We'll probably frame the card as an achievement and hang it somewhere around the office :-D


Search Engine Spiders

bob van der Poel (bvdp at xplornet.com)
Thu Apr 20 16:46:45 PDT 2006

Answered by: Ben, BobV, Francis, Jason, Thomas

Any light on how search engines like Google work? I've recently moved my web stuff to the free site provided by my new ISP. For some reason Google refuses to list any of my "really good stuff", and from doing some searches I don't think that any other (or very few) pages on this site are being found either. My page http://users.xplornet.com/~bvdp has a fairly unique phrase I can test with: "wynndel broadband xplornet". So far the only hits are back to my OLD web pages, which announce a move. BTW, those pages are gone.

I've discussed this with Xplornet and they have come to the conclusion that their site is "sandboxed", but don't know why, etc. And, frankly, I'm not sure they care much either :)

I have tried to "seed" things by filling out the google form (sorry, forget the exact spot right now).

I find the whole issue pretty interesting, especially since in the past when I've created new pages, etc. they have been found by google within hours. Days at the most. These new pages have been up for about a month now and not a hint from google.

[Thomas] - Probably, then, Google's bot is listed in 'robots.txt' on the webserver as not being allowed to spider your site (or a subdomain). It's a common practice to ban most harvesting bots, and to allow only a few.

[[BobV]] - I've actually checked that out. The ISP assures me that there are no robots.txt files buggering things up. I should have mentioned that in my original post, I guess.

[Thomas] - Google works by sending out spiders -- programs that essentially use referral-following (URL-hopping, if you like) to see what's what. The more widely linked your site is, the greater the chance you'll get 'spidered'. (I am sure this is explained on Google's site. Go and look.)
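To make the "URL-hopping" concrete, here's a minimal, purely illustrative sketch of the extraction half of a spider (mine, not anything Google publishes): read one fetched HTML page on stdin and print every href target. A real spider queues those URLs, fetches each one, and repeats -- which is why heavily-linked sites get found sooner.

  /* hrefs.c - print the target of every href="..." in an HTML page read
   * from stdin.  Illustrative only: assumes lowercase, quoted hrefs. */
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      static char page[1 << 20];            /* up to 1MB of HTML */
      size_t len = fread(page, 1, sizeof page - 1, stdin);
      char *p = page;

      page[len] = '\0';
      while ((p = strstr(p, "href=")) != NULL) {
          char quote = p[5];

          p += 5;
          if (quote == '"' || quote == '\'') {
              char *end = strchr(++p, quote);

              if (!end)
                  break;                    /* unterminated attribute */
              *end = '\0';
              puts(p);                      /* a URL to visit next */
              p = end + 1;
          }
      }
      return 0;
  }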

  > searches I don't think that any other (or very few) pages on this site 
  > are being found. My page http://users.xplornet.com/~bvdp has a fairly 
  > unique pattern I can test "wynndel broadband xplornet". So far the only
  > hits are back to my OLD web pages which announce a move. BTW, those 
  > pages are gone.
  

So? Even if your site is hit by Google (or some other indexing surveyor), that's still no guarantee your search results will show up. Consider a company such as Google. Do you have any idea just how much data they house? Whenever you go to 'www.google.com' and perform a search, you're connecting to a different cache of information each time, so the results you get returned for a known search may well differ. And note that "specific" != "likely chance of singular hit". In fact it's quite the opposite. If your site is the only one listed with a specific phrase, and has not been referenced elsewhere on the net for Google to pick up, what do you think the chances of it being listed are?

  > I've discussed this with Xplornet and they have come to the conclusion 
  > that their site is "sandboxed", but don't know why, etc. And, frankly,   
  > I'm not sure they care much either :)

To me it's a moot point, and it is slightly egotistical to want to see your site on a search engine. If that happens, all well and good. Doubtless it might happen eventually.

[[BobV]] - Of course it is moot. Which is why I posted this message. Figured it might be of some interest to others. Sorry if you think I'm just being egotistical or whatever.

[Jason] - Quoting BobV, "Any light on how search engines like google work?"

Well, you start with a novel concept and a bunch of really smart people. Then you leverage commodity hardware and open source software to create a superior product with a clean, simple design. You dominate the market, go public, and then you sell out your principles to help China suppress free speech...

...oh, you mean *technically*? No idea. :-)

[Ben] - Well, there's a surefire way to find out if Google knows about you:

ben at Fenrir:~$ google http://users.xplornet.com/~bvdp
Sorry, no information is available for the URL users.xplornet.com/~bvdp

  * If the URL is valid, try visiting that web page by clicking on the following link: users.xplornet.com/~bvdp
  * Find web pages that contain the term "users.xplornet.com/~bvdp"

I guess it doesn't. I'm assuming you went to http://www.google.com/addurl.html to add yourself - yes? And you made sure to put in a list of keywords that are relevant to your site - yes? I've always found this to work just fine for my clients, although it does take up to a couple of weeks (and it's amazing how impatient people can get in the meantime. :)

[[BobV]] - Yes, that's what I did. I also tried to set up a site map but it appears that I need permission to run python on the site for that to work. I can't run anything on the remote site, so that is out.

[[[Francis]]] - All the site map is, is a list of links, yes? Probably with some organisation and grouping and background and the like, but fundamentally, a link to every page on one page? (And the hrefs shouldn't start with "http://", and probably shouldn't start with "/" either, for portability.)

Do you have a local copy of the web site content? If so, you can run a local web server configured similarly to the public one, and run the site map generating program against that. (Or use "find" with some extra scripting, but that uses a file system view rather than a web client view of the content.)

That should lead to one or a few html pages being created; upload them to the public site and you're laughing.
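For a simple flat site, even a tiny program will do. A minimal sketch (mine, not the Google sitemap tool Bob mentions), assuming all the pages sit in one directory, and emitting the relative hrefs suggested above:

  /* sitemap.c - emit a one-page map of every .html/.htm file in a
   * directory, with relative hrefs.  A flat layout is assumed;
   * recursing into subdirectories is left as an exercise. */
  #include <stdio.h>
  #include <string.h>
  #include <dirent.h>

  int main(int argc, char **argv)
  {
      const char *dir = (argc > 1) ? argv[1] : ".";
      DIR *d = opendir(dir);
      struct dirent *e;

      if (!d) {
          perror(dir);
          return 1;
      }
      printf("<html><head><title>Site map</title></head><body><ul>\n");
      while ((e = readdir(d)) != NULL) {
          const char *dot = strrchr(e->d_name, '.');

          if (dot && (!strcmp(dot, ".html") || !strcmp(dot, ".htm")))
              printf("  <li><a href=\"%s\">%s</a></li>\n",
                     e->d_name, e->d_name);
      }
      printf("</ul></body></html>\n");
      closedir(d);
      return 0;
  }

Redirect the output to sitemap.html and upload that along with the rest.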

[[[[BobV]]]] - Yes, I think you have it pretty much correct. I've not bothered to 'dl the code ... I don't see that it does me much good to know how simple my simple little site is :) But your idea of running it on the local site and then putting the results on my public site crossed my mind as well. Unless they have some checksums/tests in the code it should work fine.

[Ben] - So, try submitting to a bunch of other engines in the meantime. They all "steal" from each other, AFAIK - and the more places there are on the Web that refer to your site, the higher the chance that Google will run across it sooner.

[[BobV]] - You know, I get so used to using Google that I forget that other engines even exist. Just for fun, I tried Yahoo and, interestingly, it found the site.

So, it probably is just a matter of time.

[[[Ben]]] - This should also provide a bit of reassurance on Google's score: if Yahoo indexed you, then so will they - in their own good time.

[[BobV]] - Honestly, this is not a big deal for me. I do contribute some software and ideas which I suppose other folks might be interested in (which is why it's on the web). I'm just trying to understand (at a low level) how these search spiders work. I wonder, with all the computing power that something like Google has ... how long would it take to "spider" the whole web? They are already referencing billions of pages, and my mind reels thinking of how many links that might involve, let alone keyword combinations.

[[[Ben]]] - Oh, it's fun (and at least somewhat mind-boggling) to think in those terms. I mean, just how much space does the Wayback machine (which takes regular snapshots of just about everything on the Net) have? More interesting yet, when will that kind of capacity - and data-processing capability - become available to us consumer types?

"Say, Bob, do you happen to have the latest copy of the Net? I think I'm a bit behind; just beam me your version if you don't mind..."

[[[[BobV]]]] - Did I read somewhere, recently, that someone was marketing a disc (HD?) with "the net" on it? Well, a very small subset of the net :) I think this was for folks who didn't have broadband or something. Seemed like a silly idea to me.

So, which is growing faster ... consumer storage or the size of the net? I'd bet on the net! Mind you, if we take all the porn, music, duplicate pages and warez software off the net ... well, gosh, then there would be nearly nothing left.

[[[Francis]]] - Do you have access to the web server logs? Ideally the raw logs, which you can play with yourself; but even the output of one of the analysers may be enough.

[[[[BobV]]]] - Unfortunately, no. Just a dumb site from this end.

[[[Francis]]] - If you can see the HTTP_USER_AGENT header in the logs, you'll have an idea of when the search engine spiders visit. Apart from the "normal" clients, almost all of which pretend to be some version of Mozilla, the spiders tend to have reasonably identifying names.

[[[[BobV]]]] - Interesting. No secrets on the web :)

[[[Francis]]] - If you've ever seen "googlebot" (or however it is spelled) visit, you can expect that they are aware of your content[*]. And if you've never seen "yahoobot" visit (again, I'm making up the user-agent string here -- when you see it, you'll know it) you can wonder about how they managed to index your content.

[*] Unless it's someone playing games, using that sort of identifier in their own web client. All of the HTTP headers are an indication only.

Whether the logs are available is up to the hosting company. And if they are available, whether they include HTTP_USER_AGENT or HTTP_REFERER or any other specific headers is also up to them.

But if you can get the logs, you can spend hours playing with them and finding trends and counting visitors and seeing why "unique visitors" is a fundamentally unsound concept. But there are almost certainly more enjoyable ways of spending your time :-)
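And should you get the raw logs someday, picking the self-identified spiders out of them takes very little code. A minimal sketch, assuming the common Apache "combined" log format, where the User-Agent is the last quoted field (whether your host logs that header at all is, again, up to them):

  /* spiders.c - read an Apache "combined"-format log on stdin and print
   * the User-Agent of anything that looks like a crawler.  The format
   * (User-Agent as the last quoted field) is an assumption. */
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      char line[4096];

      while (fgets(line, sizeof line, stdin)) {
          char *end = strrchr(line, '"');   /* closing quote of UA */
          char *start;

          if (!end)
              continue;
          *end = '\0';
          start = strrchr(line, '"');       /* opening quote of UA */
          if (!start)
              continue;
          start++;
          /* crude heuristic: most crawlers self-identify this way */
          if (strstr(start, "bot") || strstr(start, "Bot") ||
              strstr(start, "spider") || strstr(start, "Slurp"))
              printf("%s\n", start);
      }
      return 0;
  }

Run it as "./spiders < access_log" -- subject to the footnote above, of course: a User-Agent is only ever a claim.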

Good luck

[[BobV]] - Impatient ... when you say "weeks". Gosh, Ben, these days anything more than a minute or two is an eternity :)

[[[Ben]]] - [laugh] Well, there is that. Most people have been trained to expect instant gratification these days (although many claim that instant is too slow in the Internet Age), and a couple of weeks might as well be a lifetime. Although I understand that a baby still takes nine months, no matter how hard you work at it. It's all so confusing...

Quoting from the classic "Story of Mel",

Mel loved the RPC-4000 because he could optimize his code: that is, locate instructions on the drum so that just as one finished its job, the next would be just arriving at the "read head" and available for immediate execution.

I suspect that your previous experiences happened when your part of IP space (or whatever other system Google uses to sequence their spiders) was just about to "arrive at the read head". This is, I assure you, not the usual case. :)

 


Profile has errors

K1 (research.03 at gmail.com)
Thu Apr 13 16:21:06 PDT 2006

Answered by: Thomas

Hi:

I am new to IPCop and am learning it while setting it up. I wonder if you can point me to someone who can help me.

Your article was very informative, but my issue is with DSL.

I have Verizon DSL and am using a Westell Versalink 327W. I am utilizing the red and green interfaces.

My IPCop box is handing out addresses as I have DHCP enabled, but I get time-out errors when I try to ping it. When I access it from a browser on the green interface I get "Current profile-Non" and "the profile has errors".

When I view the log on my Westell I see my IPCop box. Help! What am I doing wrong?

[Thomas] - What are these 'red and green interfaces'?

What about using ''traceroute'' instead? What's your network topology like? Are you using a gateway (perhaps some form of NAT?) Do you handle your own DNS? You need to be much more thorough in the information you detail to us.

Sounds like you need to read the IPCop userguide, or somesuch then.

You're not telling us what's in the logs, for starters. We're not mind readers -- and knowing what's in this file might just help provide pointers to a solution.


Where can I find GLUE (Group of Linux Users Everywhere)

Shlomi Fish (shlomif at iglu.org.il)
Sat May 6 19:56:15 PDT 2006

Answered by: Ben, Rick, Shlomi

Hi!

I am the dmoz.org editor of the Linux Users' Group category. Recently, GLUE (Group of Linux Users Everywhere) has disappeared from the Net. What is its status and when will it be restored? Does it have a new URL?

Regards,

Shlomi Fish

[Ben] - Well - hi, Shlomi! Pleasure to hear from you.

(Shlomi Fish is a long-time contributor to the Perl community; lately, for example, I've enjoyed his contributions on the "Perl Quiz-Of-The-Week" list - so it's a pleasure to be able to give what help I can in return, little though it may be in this case.)

A quick search of the Net seems to indicate that GLUE was being hosted by our former webhost, SSC, with whom we parted ways quite a while back.

[[Shlomi]] - I see.

[Ben] - According to the Wayback Machine, though, whatever arrangement they had never amounted to much:

http://web.archive.org/web/*/http://www.ssc.com/glue/groups/

My best suggestion for anyone looking for a LUG is to take a look at the LUG Registry (http://www.linux.org/users/) - but I'd imagine that you already know that. As to GLUE, my best guess is that they're gone.

[[Shlomi]] - I see. Well, I'll try to find out at SSC.

[Rick] - Hi, Shlomi. Ma nishma? ("How are things?") ;->

[[Shlomi]] - Hi. Ani Beseder ("I'm fine").

[Rick] - GLUE seems to have been completely eliminated in the most recent Web site reorganisation at SSC. When that happened, because I reference it in the Linux User Group HOWTO, I looked in vain for any sort of explanation, so I suppose none will be forthcoming.

[[Shlomi]] - OK. Thanks. I'll try to find out more at SSC.


Graham Jenkins

Derek (derek.davies at telco4u.net)
Sun Apr 30 11:46:14 PDT 2006

Answered by: Ben, Thomas

Dear Gang, can you help me? My old school friend Graham Jenkins moved to Australia many years ago; is he your Graham Jenkins?

My friend was born in Hereford, England on the third of April 1940, attended Lord Scudamore's boys' school, and has three sisters.

This may be a shot in the dark, but I would like to see the old fellow again before I pop my clogs.

I have been trying to find him for quite a while.

hope you can help

Derek Davies

[Thomas] - You could always look at his author page, and email him. IIRC he might be claiming to be from Australia.

[[Ben]] - That is indeed what his bio says; "a Unix Specialist at IBM Global Services, Australia... lives in Melbourne".

http://linuxgazette.net/authors/jenkins.graham.html

  > My friend was born in Hereford England on the third of April 1940,attended
  > Lord Scudamores boys school has three sisters.

[[Ben]] - The picture in his bio looks a bit younger than that, but it's plausible.

[Thomas] - Scudamore's school, eh? I know of it by reputation.

  > This may be a shot in the dark,but I would like to see the old fellow again
  > before I pop my clogs.

[Thomas] - That sounds like an awfully odd thing to say.

[[Ben]] - Google shows 981 usages. :)

[Thomas] - I must admit that your question is singularly unique. ;) I don't think Jenkins is a member of TAG. If you'll permit us to do so, we'll add this to the Gazette Matters section of LG.

[[Ben]] - I note (again, via Google) that Graham is quite active in the Perl community and elsewhere. Perhaps emailing him at the address on his page would be the most efficient way of getting hold of him.


Request for any ADC's driver source code

osas e (osas53 at yahoo.com)
Sat May 20 10:43:13 PDT 2006

Answered by: Thomas

Hello,

I am a student doing a project and want to interface an ADC0804 to a PC's parallel port to read the humidity value of the atmosphere. Knowing that there are professionals like you out there, I am writing to request driver source code to read from this or any other ADC using a Perl program. It is my belief that you will be kind enough to help a newbie like me grow in computer interfacing.

Kind Regards

Ediae Osagie

[Thomas] - Tell us more about this project. Does your project also stipulate that you yourself have to work out how to interface an ADC0804 to the parallel port?

As a "professional" (and a student) it is usually a given that a student does their homework beforehand. How is it you heard of us, anyway? You can't have read the Linux Gazette, since you'd know that we don't do others' homework. You're no different from this.

Having done some electronics in the past (these are just pointers for you to consider): interfacing to the parallel port is relatively easy -- assuming you do so in an appropriate language. C, for instance, has many different routines to achieve this, and in fact any simple I/O system can read and write /dev/lp* as necessary. Try it. That's the only "driver" you'll really need.
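To make "try it" concrete, here is a minimal sketch -- emphatically not an ADC0804 driver (the conversion handshake and the wiring are still your homework): it grants itself access to the usual LPT1 registers and reads the status byte, which is where your ADC's output bits would have to arrive. The 0x378 base address is an assumption about your machine, and it needs root.

  /* lpt-peek.c - read the parallel port status register directly.
   * 0x378 (LPT1) is assumed; run as root.  x86/glibc only; compile
   * with optimisation (gcc -O2) so <sys/io.h>'s inlines work. */
  #include <stdio.h>
  #include <unistd.h>
  #include <sys/io.h>     /* ioperm(), inb(), outb() */

  #define BASE 0x378      /* data register; BASE+1 is status */

  int main(void)
  {
      unsigned char status;

      if (ioperm(BASE, 3, 1) != 0) {
          perror("ioperm (are you root?)");
          return 1;
      }
      outb(0x01, BASE);       /* e.g. pulse a data line wired to the
                                 ADC's /WR to start a conversion    */
      usleep(100);            /* ADC0804 conversion takes ~100us    */
      status = inb(BASE + 1); /* status bits 3..7 come from port
                                 pins 15, 13, 12, 10 and 11         */
      printf("status register: 0x%02x\n", status);
      return 0;
  }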

Enjoy your homework -- we won't help you do it.

Talkback: Discuss this article with The Answer Gang

Copyright © 2006. Released under the Open Publication license unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

Published in Issue 128 of Linux Gazette, July 2006

<-- prev | next -->