Kyle R. Burton on Wed, 26 Jun 2002 16:06:18 -0400



Re: IPC::Open3 problem...


> If you are using openssh, perhaps try -n to prevent ssh from forking into
> the background and detaching from the terminal (on the remote side).

I'm running an interactive client, with an open3 call like this:

  my @sshArgs = (
    '-2',
    '-o', 'BatchMode=yes',
    '-x',
    '-C',
    'user@host',
    'path/to/interactiveClient',
  );
  my $pid = open3( $writer, $reader, $error, 'ssh', @sshArgs );

> This sounds like writing to a closed fd, because the child closed it or
> it's dead.  Did you get a pid back?

A pid does come back from open3().
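(Worth noting: open3() returning a pid only proves the fork succeeded, not
that the child is still alive by the time you write to it.  A minimal sketch
of checking that distinction with a non-blocking waitpid -- using 'true' as a
stand-in command that exits immediately:)

```perl
use strict;
use warnings;
use IPC::Open3;
use POSIX ":sys_wait_h";
use Symbol qw(gensym);

my ($writer, $reader, $error) = (undef, undef, gensym());

# Spawn a child that exits immediately so we can see the difference
# between "open3 returned a pid" and "the child is still alive".
my $pid = open3($writer, $reader, $error, 'true');
sleep 1;    # give the child time to exit

# WNOHANG makes waitpid return 0 if the child is still running,
# or the pid once it has exited.
my $reaped = waitpid($pid, WNOHANG);
if ($reaped == $pid) {
    print "child exited with status ", $? >> 8, "\n";
} else {
    print "child still running\n";
}
```

(If the ssh child died between startup and your later write, the write would
hit a closed pipe and raise exactly the SIGPIPE you're seeing.)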

> hmmm ... maybe ssh isn't working after startup because you've logged out,
> and the server once had a tty, but now does not, and ssh is behaving
> differently.  I would add a few lines to your program to fork, close input,
> and detach from its tty, and see if the behavior is different.

I'm not sure what you're getting at.  The server is a single process that
uses IO::Select to multiplex the workers and inbound socket connections.
Forking would significantly complicate things from where they currently
stand.
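(For context, the multiplexing shape I mean is roughly this -- a sketch using
two plain pipes as stand-ins for the worker handles and the listening socket,
not the server's actual code:)

```perl
use strict;
use warnings;
use IO::Select;
use IO::Handle;

# Two pipes stand in for a worker's output handle and a second worker;
# in the real server these would be the open3 readers and the listen socket.
pipe(my $r1, my $w1) or die "pipe: $!";
pipe(my $r2, my $w2) or die "pipe: $!";
my $sel = IO::Select->new($r1, $r2);

$w1->autoflush(1);
print $w1 "hello from worker 1\n";

# can_read() waits (here up to 1s) and returns only handles with data,
# so a single process can service many workers without forking.
my @got;
for my $fh ($sel->can_read(1)) {
    push @got, scalar <$fh>;
}
print "got: $_" for @got;
```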

> > 
> > Has anyone come across anything similar before?
> > 
> not really.  I've done this sort of thing, but not exactly.  My final
> solution didn't keep persistent "workers" spawned by ssh.  I can't recall
> whether I used bi-directional pipes or not with ssh.  I think I relied
> a lot on error codes for feedback.  I would have workers started and
> reaped by a server that was always running and took commands from a
> FIFO instead.  I used ssh only to write to that FIFO and queue requests.
> BTW, make sure your "workers" will get reaped by ssh.  Otherwise you
> could end up with lots of dead workers.

The workers typically don't get recycled; they run for long periods of time.
It's only when a worker dies (a rare event) that it needs to be restarted.
More often we add in more workers at run-time, which the server is supposed 
to support without stopping its current workload, as more worker boxes
become available.

> I assume you are being careful with buffering etc. ... it doesn't sound
> like a buffering problem, because I would expect a deadlock, not a SIGPIPE.

I'm using non-blocking IO with IO::Select, and setting autoflush on all the
filehandles to prevent buffering/deadlock issues.  Nothing is deadlocking.
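(For anyone following along, the autoflush setup I mean looks roughly like
this -- a sketch using 'cat' as a stand-in for the remote interactive client,
since cat echoes its stdin back:)

```perl
use strict;
use warnings;
use IO::Handle;
use IPC::Open3;
use Symbol qw(gensym);

my ($writer, $reader, $error) = (undef, undef, gensym());
my $pid = open3($writer, $reader, $error, 'cat');  # cat echoes its stdin

# Without autoflush, the "ping" below can sit in perl's stdio buffer
# while we block reading the reply -- the classic request/reply deadlock.
$writer->autoflush(1);

print $writer "ping\n";
my $reply = <$reader>;
print "reply: $reply";

close $writer;
waitpid($pid, 0);
</imports-placeholder>
```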


The thing that is really baking my noodle is that it works when the server
is started (multiple workers get launched), but then stops working (throws 
the SIGPIPE) after the server is up and running - but it's the same method 
call.  I'm just confused about what could be different between startup time
and run-time.

You know I just thought of something -- the server forks and
backgrounds itself (redirecting STDIN, STDOUT and STDERR) after the
initial bevy of workers has been launched.  That is a definite
difference between the two timeframes.  Could that have anything to do
with it?
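(If it does turn out to be the backgrounding step, one thing I could try --
this is speculation on my part, not a confirmed fix -- is reopening the
standard handles to /dev/null rather than closing them outright, and trapping
SIGPIPE so a failed write gets logged instead of killing the server:)

```perl
use strict;
use warnings;

# A common daemonizing gotcha: if you *close* STDIN/STDOUT/STDERR instead
# of reopening them, the next handle the process opens (including the
# pipes open3 creates) can land on fd 0/1/2, and open3's child-side
# dup()s then behave differently than they did before backgrounding.
open STDIN,  '<',  '/dev/null' or die "can't reopen STDIN: $!";
open STDOUT, '>',  '/dev/null' or die "can't reopen STDOUT: $!";
open STDERR, '>&', \*STDOUT    or die "can't reopen STDERR: $!";

# Trap SIGPIPE so a worker that dies mid-write is logged, not fatal
# to the whole server.
$SIG{PIPE} = sub { warn "SIGPIPE: a child closed its end of a pipe\n" };
```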


Thanks for the reply,
Kyle

-- 

------------------------------------------------------------------------------
Wisdom and Compassion are inseparable.
        -- Christmas Humphreys
mortis@voicenet.com                            http://www.voicenet.com/~mortis
------------------------------------------------------------------------------