
Linux Help Desk

(no subject) [Oct. 9th, 2010|04:35 pm]

I recently did some upgrades.

I upgraded my file server from four 750GB IDE drives to three 1.5TB SATA drives. It's still running Ubuntu Hardy Heron.
I upgraded my workstation to be running Ubuntu Karmic Koala.

In both cases I'm running NFS.

Before the changes I could get sustained transfers of about 40MB/s to or from the server with large (>1GB) media files. It also didn't matter whether I was copying or moving, or using NFS or scp.

Now I'm getting about 38MB/s from the server and 20MB/s to the server when copying... If I MOVE the file instead, I get about 9MB/s to the server and 15MB/s from it.

What kills me is that if I scp the files, I can get about 40MB/s to the server and 55MB/s from it.
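
For anyone chasing similar numbers, a rough way to tell disk speed from network speed is to time raw writes and reads with dd on each side. A minimal sketch; TESTDIR is a placeholder, so point it at the NFS mount and then at a local disk and compare:

```shell
# Rough throughput check to separate disk speed from network speed.
# TESTDIR is a placeholder: point it at the NFS mount, then at a local
# disk, and compare the two rates.
TESTDIR=${TESTDIR:-/tmp}
# Write test: conv=fsync makes dd wait until the data actually reaches
# the server/disk, so the rate isn't just the local page cache.
dd if=/dev/zero of="$TESTDIR/ddtest" bs=1M count=64 conv=fsync 2>&1 | tail -n 1
# Read test of the same file.
dd if="$TESTDIR/ddtest" of=/dev/null bs=1M 2>&1 | tail -n 1
rm -f "$TESTDIR/ddtest"
# The rsize/wsize the NFS client actually negotiated is also worth a look:
grep nfs /proc/mounts || true
```

If scp beats NFS in both directions, the disks themselves are fine and the suspect list shrinks to NFS settings (sync vs. async exports, rsize/wsize) on the upgraded client.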

Any thoughts or comments?

SW RAID problem... [May. 12th, 2010|09:22 pm]

You may remember my ongoing RAID problems. The last one was solved by replacing the disks. This one is a little more odd.

I've got three 1.5TB SATA drives configured as RAID5 with an ext3 partition. Today the machine crashed completely while I was deleting a rather large directory. I rebooted to find that the journal needed to be rebuilt. Partway through, I thought the machine had crashed again, stopped the process, dinked around, and noticed that mdadm showed a drive was missing:
( cut to save space )

EDIT: I've figured out how to get the array to see the newly partitioned drive:

sudo mdadm /dev/md0 --add /dev/sdb1

This seems to have worked: the array now recognizes the drive and is currently rebuilding... The question of how the drive went 'missing' to begin with is still open. As is how I might test that sucker to see if it's actually bad.
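
On testing that sucker: smartmontools is the usual tool for deciding whether a dropped disk is actually dying. A sketch, with the device name purely hypothetical (run as root on the server):

```shell
# Sketch of a drive health check; /dev/sdb is a stand-in for the real
# device. Written to a scratch script since it needs root + real hardware.
cat > /tmp/check-drive.sh <<'EOF'
#!/bin/sh
DEV=${1:-/dev/sdb}
smartctl -H "$DEV"         # overall SMART health verdict
smartctl -l error "$DEV"   # hardware errors the drive has logged
smartctl -t long "$DEV"    # kick off a full surface scan (takes hours)
cat /proc/mdstat           # and watch the md rebuild progress here
EOF
chmod +x /tmp/check-drive.sh
```

dmesg output from around the time of the crash is also worth grepping; ata reset/error messages there usually explain why md kicked the drive.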

SWRAID... [Apr. 13th, 2010|09:18 pm]

I recently had a RAID5 array using four 250GB IDE drives. I had set up an NFS share and could transfer to and from a workstation at about 30-40MB/s (that's megabytes per second). This weekend I rebuilt the array using three 1.5TB SATA drives installed in the same machine. I reinstalled the identical OS (Ubuntu Hardy) and configured the array with mdadm. The initial sync took about 30 hours... it seemed a bit long, but I went with it. While it was syncing, I started transferring the data back from the backup (those transfers were slow too, but again I went with it).

Everything seemed hunky-dory... Today, after the bulk of the transfers had finished, I started copying over some other large movie files using a newly configured NFS share. The throughput was appalling: about 3-4MB/s. I tried using scp to copy the file to the IDE boot drive and got about 30-40MB/s. Using scp to copy the file to the share, I got a marginally better 8-12MB/s.

Something is clearly not right. Any thoughts? Help?
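
One thing worth ruling out before blaming NFS: RAID5 write throughput on md is notoriously sensitive to the stripe cache, and the default is small. A tuning sketch; md0 and the values here are assumptions, and it needs root:

```shell
# Scratch script: RAID5 write-speed sanity checks and common tunables.
cat > /tmp/raid-tune.sh <<'EOF'
#!/bin/sh
# First, confirm the initial sync has actually finished:
cat /proc/mdstat
# The default stripe cache (256 pages) is often too small for large
# sequential writes; bumping it is a well-known RAID5 write fix.
echo 8192 > /sys/block/md0/md/stripe_cache_size
# Larger readahead helps sequential reads from the array.
blockdev --setra 4096 /dev/md0
EOF
chmod +x /tmp/raid-tune.sh
```

The scp-to-share result (8-12MB/s) suggests the array itself is slow at writes, not just the NFS layer, which is why the stripe cache is the first thing to try.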

ftp mput recursive and/or curl question [Feb. 15th, 2010|03:26 am]

[Current Mood |perplexed]

I was trying to move some stuff up onto my server, and I thought I'd try to do it without using a GUI FTP program or my host's online cpanel tools (blecch) for once.
Mostly because I want to be able to script what I'm doing for future use.
Now, I've used GUI FTP clients (and even wrote a little Tcl one for quick jobs), but the one I wrote
will only move one file at a time... in fact, I recall not being able to figure out how to send a whole directory recursively when making the little guy.

Now, I know it's possible to move a whole directory at a time, because thousands of existing GUI FTP clients do it.
But I don't seem to be succeeding.
First, I can't find an ftp command (this is the command-line ftp on Debian Lenny, not the Tcl FTP package I used to write my little thingy) to move a whole directory recursively.
I can give it a wildcard and it will upload all the individual files in a directory, but it won't send a directory inside that directory, and the files therein, recursively, as I want.

I also thought I'd try curl.
( my feeble attempts )

So, my questions are:

Is there a way to send a directory and its contents, including subdirectories, recursively, via ftp on the command line?
And, if so, what is it?
(I'm not finding it in the man page, and I did some googling before coming to ask, but
I only found info about recursive mget; when I tried to apply the same approach to mput, I did not achieve the desired result.)
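
For the record, plain command-line ftp has no recursive put; the usual answer is a client that does. ncftpput (from the ncftp package) is one, assuming it's available. Host, credentials, and paths below are all placeholders:

```shell
# Scratch script: recursive upload with ncftpput.
# -R recurses into the local directory; remote dir comes before the
# local files in ncftpput's argument order.
cat > /tmp/ftp-push.sh <<'EOF'
#!/bin/sh
ncftpput -R -u "$FTP_USER" -p "$FTP_PASS" ftp.example.com /public_html ./site
EOF
chmod +x /tmp/ftp-push.sh
```

lftp is another option and is covered below for the sync case; either one is scriptable, which was the whole point.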


Why do my curl efforts not give me the desired result?

One more:
I found wput, which will send whole directories recursively, but I don't see in the man page how it authenticates with the server.
I need to log in to FTP up (no anonymous on my server, no way).
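
wput, like wget, appears to take credentials embedded in the URL, so no separate login step is needed. Host and paths here are placeholders:

```shell
# Scratch script: wput with URL-embedded credentials (wget-style).
cat > /tmp/wput-push.sh <<'EOF'
#!/bin/sh
wput ./site/ "ftp://$FTP_USER:$FTP_PASS@example.com/public_html/"
EOF
chmod +x /tmp/wput-push.sh
```

Putting the password in a URL does expose it to anyone who can read the script or the process list, so a restricted-permission script (or ~/.netrc) is worth considering.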

I did end up sending everything up with gFTP today, incidentally, but I will be updating
these pages frequently and would rather script it and do it from the command line.
In fact, nanoblog, to my knowledge, can call a script to publish the darned thing if I make a suitable
script and put it in the conf. So yes, I do very much want to learn how to accomplish recursive putting of files to my server,
via the command line, for future use.

I'd really like a script that will send everything up, only overwriting an existing file on the remote server
when the local file has been touched more recently.
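
lftp's mirror command has exactly that: with -R it pushes a local tree to the server, and --only-newer skips files whose remote copy is already up to date. A sketch of a publish script, with all names placeholders:

```shell
# Scratch script: recursive, newer-only publish with lftp.
# mirror -R reverses the usual direction (local -> remote);
# --only-newer transfers a file only when the local copy is newer.
cat > /tmp/publish.sh <<'EOF'
#!/bin/sh
lftp -u "$FTP_USER","$FTP_PASS" "$FTP_HOST" \
     -e "mirror -R --only-newer ./local-site /public_html; quit"
EOF
chmod +x /tmp/publish.sh
```

A script like this is also the natural thing to point nanoblog's publish hook at.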


(no subject) [Jan. 10th, 2010|09:33 pm]

Hardy Heron recovery...

I've been having this oddball X failure problem. I posted about it a couple of months ago, with no joy. After weeks of moderate dinking (the machine would work, but I could only use it remotely... not really a problem, as it's a file server), it turns out the entire problem was a failing graphics card. I've replaced the card, and when rebooting I selected an older kernel... just to see what would happen with the new card. This apparently was a problem, as now I get errors when I boot the normal kernels, saying the files don't exist.

Here's my question:

Isn't there a recovery option on the live CD? I've booted from the ISO I used to originally install, and I don't see one. I've booted into the live CD itself, and I don't see one there either. I've tried installing, and the installer never seems to recognize the existing partition...

Is it a question of editing grub?

I'm really lost. I really, really don't want to go for a complete reinstall...
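
It isn't a live-CD menu entry, but the usual Ubuntu rescue path is to chroot into the installed system from the live CD and reinstall the kernel/bootloader, with no reinstall needed. A sketch, with /dev/sda1 standing in for the real root partition:

```shell
# Scratch script: live-CD rescue via chroot. /dev/sda1 is hypothetical;
# check `sudo fdisk -l` for the real root partition first.
cat > /tmp/rescue.sh <<'EOF'
#!/bin/sh
mount /dev/sda1 /mnt
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
chroot /mnt /bin/bash
# inside the chroot:
#   apt-get install --reinstall linux-image-$(uname -r)
#   update-grub
#   grub-install /dev/sda
EOF
chmod +x /tmp/rescue.sh
```

"Files don't exist" at boot often means the menu entries point at kernel images that were removed; update-grub regenerates the menu from what's actually in /boot.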

sudo for write access to a single file [Jan. 4th, 2010|09:27 am]

Can sudo be used to grant write access to a single file? I've granted sudoer access to a script, but the script writes to a log file. Currently the log file is mode 644, and sudoers are unable to use the script because they can't write to the log. I could chmod it 666, but that would be problematic.
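
One common answer is a dedicated group on the log file rather than mode 666. A sketch of the permission scheme, demonstrated on /tmp with the current user's group so it runs unprivileged; the real path and group name would differ:

```shell
# Stand-in for the real log file; on the real box you'd create a
# dedicated group (groupadd scriptlog) and add users to it.
LOG=/tmp/myscript.log
touch "$LOG"
chgrp "$(id -gn)" "$LOG"   # real box: chgrp scriptlog /var/log/myscript.log
chmod 664 "$LOG"           # owner+group write, world read-only
stat -c '%a' "$LOG"        # prints: 664
```

Also worth checking: if the write happens inside the sudo'd script, it runs as root and a root-owned 644 log should already be writable, so the error may mean the logging happens outside the elevated command.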

sudoers & mount [Dec. 18th, 2009|09:13 am]

Ok, so I can grant sudo access to a user/group for the commands /bin/mount and /bin/umount. I'd like to do this, but I don't want to give everyone access to everything in /etc/fstab.

Is there a way to set up a user to have access to mount & umount only certain predefined mount points?
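
Two common approaches, with device and paths hypothetical: the user option in /etc/fstab lets an ordinary user mount/umount that one entry with no sudo at all, or sudoers can be restricted to mount commands with exact arguments. Both written to a scratch file here for illustration:

```shell
# Scratch file showing both config fragments (device/paths hypothetical).
cat > /tmp/mount-access.txt <<'EOF'
# /etc/fstab -- "user" lets any user mount/umount this one entry:
/dev/sdb1  /mnt/backup  ext3  noauto,user  0  0

# /etc/sudoers (edit with visudo) -- sudo only for these exact commands:
%mounters ALL=(root) /bin/mount /mnt/backup, /bin/umount /mnt/backup
EOF
```

With exact arguments listed in sudoers, `sudo mount /mnt/backup` is permitted but `sudo mount` with any other argument is refused.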

iptables says port 80 should be open, but nmap shows otherwise [Dec. 11th, 2009|05:08 pm]

I've removed the httpd server and would like to forward ports 80 and 443 to Tomcat via connectors.

The problem I have is that ports 80 and 443 are currently closed according to nmap:

Interesting ports on my.domain.org (
80/tcp closed http

/etc/sysconfig/iptables suggests otherwise:
[linux]# more /etc/sysconfig/iptables
:RH-Firewall-1-INPUT - [0:0]
-A INPUT -j RH-Firewall-1-INPUT
-A FORWARD -j RH-Firewall-1-INPUT
-A RH-Firewall-1-INPUT -i lo -j ACCEPT
-A RH-Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT
-A RH-Firewall-1-INPUT -p 50 -j ACCEPT
-A RH-Firewall-1-INPUT -p 51 -j ACCEPT
-A RH-Firewall-1-INPUT -p udp --dport 5353 -d -j ACCEPT
-A RH-Firewall-1-INPUT -p udp -m udp --dport 631 -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 631 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 8443 -j ACCEPT
-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited

I've restarted the firewall, no dice.

If I install apache and start the service, ports 80 and 443 are magically available again.

What's going on?
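
The likely explanation: an iptables ACCEPT rule only permits traffic, it doesn't bind the port. nmap reports "closed" when nothing is listening, which is why installing apache "magically" opens 80 and 443. With Tomcat on 8080/8443 (as the filter rules above suggest), one common fix is a NAT redirect; a sketch, needing root:

```shell
# Scratch script: redirect 80/443 to Tomcat's connectors.
# Assumes Tomcat listens on 8080 (HTTP) and 8443 (HTTPS).
cat > /tmp/tomcat-redirect.sh <<'EOF'
#!/bin/sh
iptables -t nat -A PREROUTING -p tcp --dport 80  -j REDIRECT --to-port 8080
iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 8443
service iptables save   # persist across reboots on RHEL/CentOS-style systems
EOF
chmod +x /tmp/tomcat-redirect.sh
```

After this, nmap should report 80/443 as open whenever Tomcat itself is up.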

Tomcat SSL [Dec. 10th, 2009|09:16 pm]

I'm setting up a standalone instance of Tomcat. After importing my SSL certificate into the keystore and restarting Tomcat, how do users access the secure pages? (I'm still waiting on the cert from VeriSign, so I can't test right now.)

Will secure pages be accessed by entering httpS:// as with the web server, or will users have to enter http://blah.com:8443 ?
If the latter, is it possible to redirect that traffic to a friendlier URL (i.e., https://blah.com)?
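
With a standard SSL connector on 8443, users would need https://blah.com:8443 unless something answers on 443 itself. A sketch of the connector fragment for server.xml, with all attribute values hypothetical, written to a scratch file here:

```shell
# Scratch copy of a Tomcat-6-era HTTPS connector fragment for server.xml.
cat > /tmp/ssl-connector.xml <<'EOF'
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true"
           keystoreFile="/path/to/keystore.jks" keystorePass="changeit"
           clientAuth="false" sslProtocol="TLS"/>
EOF
```

To get the friendlier https://blah.com, either redirect 443 to 8443 with an iptables NAT rule or run the connector on 443 directly, which requires root (or authbind) since it's a privileged port.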

Disable outbound ssh [Dec. 8th, 2009|06:17 pm]

I've googled for a couple of hours... is there not a way to disable outbound ssh?
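
It depends on how hard you want to block it. For ordinary users, an iptables OUTPUT rule is the usual approach; a sketch, needing root:

```shell
# Scratch script: reject new outbound connections to TCP port 22.
cat > /tmp/block-ssh-out.sh <<'EOF'
#!/bin/sh
iptables -A OUTPUT -p tcp --dport 22 -j REJECT
EOF
chmod +x /tmp/block-ssh-out.sh
```

This only covers the default port; ssh to a server listening elsewhere still works, and a user with their own statically linked client and an alternate port can route around any local measure short of egress filtering at the gateway.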
