Willy, I have gdb but I don't know how to use it. Could you tell me how to invoke the haproxy command under it?
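My guess, from a quick look at the gdb documentation, would be something like the following, using the same config file as below (please correct me if that is not what you meant):

gdb --args haproxy -f /etc/haproxy/haproxy.cfg -db
(gdb) run
  ... reproduce the test until the segmentation fault ...
(gdb) bt full
(gdb) info locals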
I have sent the SIGQUIT signal. The result is as follows:
holb001:~/haproxy-1.3.22 # haproxy -f /etc/haproxy/haproxy.cfg -db
Available polling systems :
sepoll : pref=400, test result OK
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result OK
I hope the information above helps you identify what's going on.
> Date: Sat, 24 Oct 2009 09:53:00 +0200
> From: w#1wt.eu
> To: alexandresumare#hotmail.com
> CC: haproxy#formilux.org
> Subject: Re: HAPROXY in zLinux is presenting Segmentation fault
>
> Hi alexandre,
>
> On Thu, Oct 22, 2009 at 01:52:05PM +0000, alexandre oliveira wrote:
> >
> > Willy, I did what you have suggested.
>
> thanks.
>
> (...)
> > holb001:~/haproxy-1.3.22 # haproxy -vv
> > HA-Proxy version 1.3.22 2009/10/14
> > Copyright 2000-2009 Willy Tarreau <w#1wt.eu>
> >
> > Build options :
> > TARGET = linux26
> > CPU = generic
> > CC = gcc
> > CFLAGS = -O2 -g
> > OPTIONS =
> >
> > Default settings :
> > maxconn = 2000, maxpollevents = 200
> >
> > Available polling systems :
> > sepoll : pref=400, test result OK
> > epoll : pref=300, test result OK
> > poll : pref=200, test result OK
> > select : pref=150, test result OK
> > Total: 4 (4 usable), will use sepoll.
>
> OK, pretty much standard.
>
> > holb001:~ # uname -a
> > Linux holb001 2.6.16.60-0.37_f594963d-default #1 SMP Mon Mar 23 13:39:48 UTC 2009 s390x s390x s390x GNU/Linux
>
> Less common ;-)
>
> (...)
> > # I've started haproxy and did a test. The result is as follows:
> > holb001:~/haproxy-1.3.22 # haproxy -f /etc/haproxy/haproxy.cfg -db
> > Available polling systems :
> > sepoll : pref=400, test result OK
> > epoll : pref=300, test result OK
> > poll : pref=200, test result OK
> > select : pref=150, test result OK
> > Total: 4 (4 usable), will use sepoll.
> > Using sepoll() as the polling mechanism.
> > 00000000:uat.accept(0005)=0007 from [192.168.0.10:4047]
> > 00000001:uat.accept(0005)=0009 from [192.168.0.10:4048]
> > 00000002:uat.accept(0005)=000b from [192.168.0.10:4049]
> > 00000003:uat.accept(0005)=000d from [192.168.0.10:4050]
> > 00000004:uat.accept(0005)=000f from [192.168.0.10:4051]
> > 00000001:uat.srvcls[0009:000a]
> > 00000001:uat.clicls[0009:000a]
> > 00000001:uat.closed[0009:000a]
> > 00000000:uat.srvcls[0007:0008]
> > 00000000:uat.clicls[0007:0008]
> > 00000000:uat.closed[0007:0008]
> > Segmentation fault
>
> Pretty fast to die... I really don't like that at all; it makes
> me think of some uninitialized variable which has a visible effect
> on your arch only.
>
> > Remember that this server is a zLinux, I mean, it runs under a mainframe.
>
> yes, but that's not an excuse for crashing. Do you have gdb on this
> machine ? Would it be possible then to run haproxy inside gdb and
> check where it dies, and with what variables, pointers, etc... ?
>
> > Suggestions?
>
> Oh yes I'm thinking about something. Could you send your process
> a SIGQUIT while it's waiting for a connection ? This will dump all
> the memory pools, and we'll see if some of them are merged. It is
> possible that some pointers are initialized and never overwritten
> on other archs, but reused on yours due to different structure sizes.
> This happened once already. So just do "killall -QUIT haproxy" and
> send the output. It should look like this :
>
> Dumping pools usage.
> - Pool pipe (16 bytes) : 0 allocated (0 bytes), 0 used, 2 users [SHARED]
> - Pool capture (64 bytes) : 0 allocated (0 bytes), 0 used, 1 users [SHARED]
> - Pool task (80 bytes) : 0 allocated (0 bytes), 0 used, 1 users [SHARED]
> - Pool hdr_idx (416 bytes) : 0 allocated (0 bytes), 0 used, 2 users [SHARED]
> - Pool session (816 bytes) : 0 allocated (0 bytes), 0 used, 1 users [SHARED]
> - Pool requri (1024 bytes) : 0 allocated (0 bytes), 0 used, 1 users [SHARED]
> - Pool buffer (32864 bytes) : 0 allocated (0 bytes), 0 used, 1 users [SHARED]
> Total: 7 pools, 0 bytes allocated, 0 used.
>
> Thanks !
> Willy
>
>