Petals Distribution

Problems running load tests about Petals ESB in the cloud with Roboconf

Details

  • Type: Bug
  • Status: New
  • Priority: Blocker
  • Resolution: Unresolved
  • Affects Version/s: 5.0.0-M1
  • Fix Version/s: 5.0.0-RC-1, 5.4.0
  • Component/s: None
  • Security Level: Public
  • Description:

    I'm using Roboconf to deploy a SOAP proxy where:

    • a first container runs the service consumer part,
    • a second container runs the service provider part,
    • the external web service runs in a sample webapp deployed on Tomcat,
    • my web service client is a load test run with SoapUI,
    • my second container is configured to be replicated when more than 5 requests are processed concurrently by the BC SOAP.

    Several problems occur:

    • timeouts occur during the startup of the replicated container, and this container seems unable to process requests for a given duration,
    • I get a few NullPointerExceptions at BC SOAP level,
    • the NIO transporter seems to establish too many connections,
    • ...
  • Environment:
    -
  1. replicated-container.log
    (961 kB)
    Christophe DENEUX
    Wed, 28 Oct 2015 - 12:43:17 +0100

Issue Links

Activity

Christophe DENEUX added a comment - Thu, 22 Nov 2018 - 09:26:51 +0100

As using Roboconf was only a way to reveal the mentioned problems, we can reproduce them with a similar product such as Kubernetes.
We can close this issue when all dependent issues are closed.

Pierre Souquet added a comment - Tue, 20 Nov 2018 - 11:13:41 +0100

Roboconf being discontinued, shouldn't this issue be closed?

Christophe DENEUX added a comment - Mon, 19 Sep 2016 - 14:59:26 +0200

The errors about the NIO transporter should be solved with PETALSESBCONT-437 (Reimplement the Remote TCP transporter with Apache Netty)

Christophe DENEUX added a comment - Mon, 19 Sep 2016 - 14:50:34 +0200 - edited

The performance test was executed on 2016/09/19 with the following results (with the autonomic feature enabled at Roboconf level):

  • no more errors at BC SOAP level,
  • a lot of connections occur at NIO transporter level: 4370 connections established for 65532 requests executed,
  • an error occurs when the third container instance is started by Roboconf.

On the third container instance started by Roboconf, we can see:

  • the incoming queues of the TCP stack are not read by the NIO transporter:
    root@940b209acee6:/var/log/petals-esb/container-prov-node-b097dcc2-fd9f-477f-9850-ac976ae1d559# netstat -a  | grep 7800
    tcp        0      0 *:7800                  *:*                     LISTEN     
    tcp   277137      0 940b209acee6:7800       172.17.0.7:47710        ESTABLISHED
    tcp        1      0 940b209acee6:7800       172.17.0.7:47700        CLOSE_WAIT 
    tcp   130240      0 940b209acee6:7800       172.17.0.7:47692        ESTABLISHED
    tcp    35201      0 940b209acee6:7800       172.17.0.7:47694        CLOSE_WAIT 
    tcp        1      0 940b209acee6:7800       172.17.0.7:47708        CLOSE_WAIT
  • probably due to the following error logged in the general log file of the 3rd container instance:
    container-prov-node-b097dcc2-fd9f-477f-9850-ac976ae1d559 2016/09/19 12:30:48,620 GMT+0000 SEVERE [Petals.Transporter.NioTransportProtocol.NioSelectorAgent] : Thread 'SelectorAgent Thread' threw an uncaught exception: null
    java.nio.channels.CancelledKeyException
            at sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:73)
            at sun.nio.ch.SelectionKeyImpl.interestOps(SelectionKeyImpl.java:82)
            at java.nio.channels.spi.AbstractSelectableChannel.register(AbstractSelectableChannel.java:201)
            at org.ow2.petals.microkernel.transport.platform.nio.selector.NioSelectorAgent.run(NioSelectorAgent.java:106)
            at java.lang.Thread.run(Thread.java:745)
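For the record, this CancelledKeyException is standard JDK behaviour when a channel is re-registered with a selector while its previous key has been cancelled but not yet flushed by a select() pass: register() finds the stale key and calls interestOps() on it. A minimal sketch reproducing it (illustrative code, not the Petals selector agent):

```java
import java.nio.channels.CancelledKeyException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;

public class CancelledKeyDemo {

    // Re-registering a channel whose key was cancelled, before the selector
    // has flushed its cancelled-key set, throws CancelledKeyException.
    public static boolean reproduces() throws Exception {
        try (Selector selector = Selector.open();
             ServerSocketChannel server = ServerSocketChannel.open()) {
            server.configureBlocking(false);
            SelectionKey key = server.register(selector, SelectionKey.OP_ACCEPT);
            key.cancel();
            boolean thrown = false;
            try {
                // register() finds the stale key and calls interestOps() on it
                server.register(selector, SelectionKey.OP_ACCEPT);
            } catch (CancelledKeyException e) {
                thrown = true;
            }
            // A selectNow() pass flushes the cancelled key; registering then succeeds.
            selector.selectNow();
            server.register(selector, SelectionKey.OP_ACCEPT);
            return thrown;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(reproduces() ? "CancelledKeyException reproduced" : "not reproduced");
    }
}
```

This would explain why the selector agent thread dies and the incoming TCP queues stop being drained.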
Christophe DENEUX added a comment - Wed, 10 Feb 2016 - 11:44:11 +0100

Status of this issue with the latest SNAPSHOT versions (the 'autonomic' feature is not enabled at Roboconf level, so no elasticity is available): only the following error about the NIO transporter still appears, with a lot of NIO transporter connections:

container-bootstrap-node 2016/02/10 10:33:26,233 GMT+0000 SEVERE [Petals.Transporter.NioTransportProtocol.NioSelectorAgent] : Socket Socket[addr=/172.17.0.9,port=7800,localport=39750] : null
java.nio.channels.ClosedChannelException
        at java.nio.channels.spi.AbstractSelectableChannel.register(AbstractSelectableChannel.java:194)
        at org.ow2.petals.microkernel.transport.platform.nio.selector.NioSelectorAgent.run(NioSelectorAgent.java:109)
        at java.lang.Thread.run(Thread.java:745)

We must relaunch the test with the 'autonomic' feature enabled at Roboconf level to have a real status!
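As a side note, ClosedChannelException is thrown by AbstractSelectableChannel.register() whenever the channel was closed before the selector agent got to register it, e.g. closed concurrently by another thread. A minimal reproduction (illustrative code, not the Petals selector agent):

```java
import java.nio.channels.ClosedChannelException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class ClosedChannelDemo {

    // register() on a channel that was already closed always throws
    // ClosedChannelException; checking isOpen() first would skip registration.
    public static boolean reproduces() throws Exception {
        try (Selector selector = Selector.open()) {
            SocketChannel channel = SocketChannel.open();
            channel.configureBlocking(false);
            channel.close(); // e.g. closed concurrently by another thread
            try {
                channel.register(selector, SelectionKey.OP_READ);
                return false;
            } catch (ClosedChannelException e) {
                return !channel.isOpen();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(reproduces());
    }
}
```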

Christophe DENEUX added a comment - Fri, 30 Oct 2015 - 15:08:22 +0100 - edited

A new run of the load test shows us a new error, PETALSBCSOAP-172:

container-bootstrap-node 2015/10/30 13:46:22,099 GMT+0000 WARNING [Petals.Container.Components.petals-bc-soap.notifyService] : Error updating probes
org.ow2.petals.probes.api.exceptions.StartDateItemUnknownException: Start date not found in list of start dates: org.ow2.petals.probes.api.probes.KeyedStartDateItem@38fd1bc0, ERROR
        at org.ow2.petals.probes.impl.KeyedResponseTimesSample.endStartDate(KeyedResponseTimesSample.java:186)
        at org.ow2.petals.probes.impl.KeyedResponseTimeProbeImpl.endsExecution(KeyedResponseTimeProbeImpl.java:253)
        at org.ow2.petals.binding.soap.listener.incoming.PetalsReceiver.updateIncomingProbes(PetalsReceiver.java:467)
        at org.ow2.petals.binding.soap.listener.incoming.PetalsReceiver.invokeBusinessLogic(PetalsReceiver.java:131)
        at org.apache.axis2.receivers.AbstractMessageReceiver.receive(AbstractMessageReceiver.java:114)
        at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:181)
        at org.apache.axis2.transport.http.HTTPTransportUtils.processHTTPPostRequest(HTTPTransportUtils.java:172)
        at org.apache.axis2.transport.http.AxisServlet.doPost(AxisServlet.java:146)
        at org.ow2.petals.binding.soap.listener.incoming.servlet.SoapServlet.doPost(SoapServlet.java:178)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
        at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:808)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:587)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
        at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
        at org.eclipse.jetty.server.Server.handle(Server.java:499)
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
        at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
        at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
        at java.lang.Thread.run(Thread.java:745)
Christophe DENEUX added a comment - Thu, 29 Oct 2015 - 17:11:19 +0100 - edited

To improve the performance of the load tests, I disabled the MONIT traces. After the load test execution, a new error (PETALSBCSOAP-171) occurs on a container that was completely started before launching the load test:

container-node 2015/10/29 15:31:48,882 GMT+0000 WARNING [Petals.Container.Components.petals-bc-soap.edpt-9dbb3290-7e51-11e5-a618-0242ac110036] : Exception on the WS invocation
javax.jbi.messaging.MessagingException: Cannot create or get an Axis service client from the pool
        at org.ow2.petals.binding.soap.SoapComponentContext.borrowServiceClient(SoapComponentContext.java:418)
        at org.ow2.petals.binding.soap.listener.outgoing.SOAPCaller.call(SOAPCaller.java:146)
        at org.ow2.petals.binding.soap.listener.outgoing.JBIListener.onJBIMessage(JBIListener.java:59)
        at org.ow2.petals.component.framework.process.MessageExchangeProcessor.invokeJBIListener(MessageExchangeProcessor.java:475)
        at org.ow2.petals.component.framework.process.MessageExchangeProcessor.processAsProvider(MessageExchangeProcessor.java:414)
        at org.ow2.petals.component.framework.process.MessageExchangeProcessor.process(MessageExchangeProcessor.java:276)
        at org.ow2.petals.component.framework.process.MessageExchangeProcessor.run(MessageExchangeProcessor.java:200)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: Pool not open
        at org.apache.commons.pool.BaseObjectPool.assertOpen(BaseObjectPool.java:140)
        at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1079)
        at org.ow2.petals.binding.soap.SoapComponentContext.borrowServiceClient(SoapComponentContext.java:382)
        ... 9 more
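The nested "Pool not open" IllegalStateException means the Axis service-client pool was closed (the real code uses Apache commons-pool, whose BaseObjectPool.assertOpen() produces exactly this message) while exchanges were still in flight. A stripped-down sketch of that pattern, assuming a simple keyed client pool with an open flag (names are illustrative, not the Petals API):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ServiceClientPool<T> {

    private final Deque<T> idle = new ArrayDeque<>();
    private boolean open = true;

    // Mirrors commons-pool's assertOpen(): borrowing after close()
    // fails with IllegalStateException("Pool not open").
    public synchronized T borrow() {
        if (!open) {
            throw new IllegalStateException("Pool not open");
        }
        return idle.poll(); // null means the caller must create a new client
    }

    public synchronized void release(T client) {
        if (open) {
            idle.push(client);
        }
    }

    public synchronized void close() {
        open = false;
        idle.clear();
    }
}
```

An in-flight worker thread calling borrow() after close() gets exactly the nested exception seen above; draining or rejecting pending exchanges before closing the pool would avoid it.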
Christophe DENEUX added a comment - Thu, 29 Oct 2015 - 12:36:37 +0100 - edited

Upgrading versions in the PVC to the SNAPSHOT following Petals ESB 5.0.0-M1, I get better results.

At the same time, I monitored the NIO transporter connections accepted by container-bootstrap from each container (Petals ESB containers are not restarted between the 2 runs):

Container     Connections established during the 1st run    Connections established during the 2nd run
container-1   317                                           15
autonomic-0   105                                           9
autonomic-1   35                                            9
autonomic-2   10                                            9
autonomic-3   -                                             10

After the 2nd run, the number of connections coming from container-bootstrap into each container:

Container     After the 2nd run
container-1   1694
autonomic-0   965
autonomic-1   544
autonomic-2   341
autonomic-3   17

I think a lot of connections are established, and this has an impact on performance.

Christophe DENEUX added a comment - Wed, 28 Oct 2015 - 16:03:13 +0100

But if, for any reason, there are no more message acceptors, or only one remaining, we can get this strange behaviour.

Victor NOËL added a comment - Wed, 28 Oct 2015 - 14:21:26 +0100

PETALSCDK-123 has nothing to do with the problem: if there is a starvation of threads, the problem is in the SOAP BC!

Christophe DENEUX added a comment - Fri, 23 Oct 2015 - 15:44:17 +0200

Looking at the socket states on the container running the consume part during the load test, we can see something like:

root@36f7381c7209:/var/log/petals-esb/container-bootstrap-node# netstat -an | grep 172.17.0.18:7800 
tcp        0      0 172.17.0.14:44241       172.17.0.18:7800        TIME_WAIT  
tcp        0      0 172.17.0.14:57436       172.17.0.18:7800        TIME_WAIT  
tcp        0      0 172.17.0.14:44245       172.17.0.18:7800        ESTABLISHED
tcp        0      0 172.17.0.14:44280       172.17.0.18:7800        ESTABLISHED
tcp        0      0 172.17.0.14:57272       172.17.0.18:7800        ESTABLISHED
tcp        0      0 172.17.0.14:57437       172.17.0.18:7800        ESTABLISHED
tcp        0      0 172.17.0.14:44732       172.17.0.18:7800        TIME_WAIT  
tcp        0      0 172.17.0.14:44240       172.17.0.18:7800        ESTABLISHED
tcp        0      0 172.17.0.14:45774       172.17.0.18:7800        ESTABLISHED
tcp        0      0 172.17.0.14:44246       172.17.0.18:7800        ESTABLISHED
tcp        0      0 172.17.0.14:44291       172.17.0.18:7800        TIME_WAIT  
tcp        0      0 172.17.0.14:55526       172.17.0.18:7800        ESTABLISHED

It seems to me that sockets in the 'TIME_WAIT' state are linked to the java.nio.channels.ClosedChannelException error previously mentioned. But in such a load test we should only have sockets in the 'ESTABLISHED' state: all sockets must be reused.
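To monitor this over a whole run, the per-state counts can be tallied automatically from the netstat output with a small parser; a sketch assuming the Linux netstat column layout shown above (state in the last of six columns):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SocketStateTally {

    // Counts TCP socket states (last column of `netstat -an` lines).
    public static Map<String, Integer> tally(String netstatOutput) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String line : netstatOutput.split("\n")) {
            String[] cols = line.trim().split("\\s+");
            if (cols.length >= 6 && cols[0].startsWith("tcp")) {
                counts.merge(cols[5], 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        String sample =
              "tcp        0      0 172.17.0.14:44241       172.17.0.18:7800        TIME_WAIT\n"
            + "tcp        0      0 172.17.0.14:44245       172.17.0.18:7800        ESTABLISHED\n"
            + "tcp        0      0 172.17.0.14:44280       172.17.0.18:7800        ESTABLISHED\n";
        System.out.println(tally(sample));
    }
}
```

A rising TIME_WAIT count across samples would confirm that connections are being closed and reopened instead of reused.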

Christophe DENEUX added a comment - Fri, 23 Oct 2015 - 15:05:13 +0200 - edited

On the container running the consume part, we can see in its log file something like:

container-bootstrap-node 2015/10/23 13:01:50,203 GMT+0000 SEVERE [Petals.Transporter.NioTransportProtocol.NioSelectorAgent] : Socket Socket[addr=/172.17.0.18,port=7800,localport=60520] : null
java.nio.channels.ClosedChannelException
        at java.nio.channels.spi.AbstractSelectableChannel.register(AbstractSelectableChannel.java:194)
        at org.ow2.petals.microkernel.transport.platform.nio.selector.NioSelectorAgent.run(NioSelectorAgent.java:109)
        at java.lang.Thread.run(Thread.java:745)

And for the associated flow, on the container running the provider part, we can see the following errors:

container-node_0 2015/10/23 13:01:15,901 GMT+0000 MONIT [Petals.Container.Components.petals-bc-soap] : traceCode = 'provideFlowStepBegin', flowInstanceId = '250d5f02-7986-11e5-94ed-0242ac11000e', flowStepId = '250f5ad1-7986-11e5-a72e-0242ac110012', flowStepInterfaceName = '{http://petals.ow2.org/samples/se-bpmn/notifyVacationService}notifyVacation', flowStepServiceName = '{http://petals.ow2.org/samples/se-bpmn/notifyVacationService}notifyVacationService', flowStepOperationName = '{http://petals.ow2.org/samples/se-bpmn/notifyVacationService}newVacationRequest', flowStepEndpointName = 'edpt-23367c20-7986-11e5-a72e-0242ac110012', flowPreviousStepId = '250d5f03-7986-11e5-94ed-0242ac11000e'
container-node_0 2015/10/23 13:01:15,903 GMT+0000 WARNING [Petals.Container.Components.petals-bc-soap] : An error occured during message processing, let's send it back since the exchange was active before.
java.lang.NullPointerException
        at org.ow2.petals.binding.soap.listener.outgoing.SOAPCaller.call(SOAPCaller.java:110)
        at org.ow2.petals.binding.soap.listener.outgoing.JBIListener.onJBIMessage(JBIListener.java:59)
        at org.ow2.petals.component.framework.process.MessageExchangeProcessor.invokeJBIListener(MessageExchangeProcessor.java:469)
        at org.ow2.petals.component.framework.process.MessageExchangeProcessor.processAsProvider(MessageExchangeProcessor.java:408)
        at org.ow2.petals.component.framework.process.MessageExchangeProcessor.process(MessageExchangeProcessor.java:275)
        at org.ow2.petals.component.framework.process.MessageExchangeProcessor.run(MessageExchangeProcessor.java:199)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
container-node_0 2015/10/23 13:01:15,934 GMT+0000 MONIT [Petals.Container.Components.petals-bc-soap] : traceCode = 'provideFlowStepFailure', flowInstanceId = '250d5f02-7986-11e5-94ed-0242ac11000e', flowStepId = '250f5ad1-7986-11e5-a72e-0242ac110012', failureMessage = 'A unknown error occurs (org.ow2.petals.binding.soap.listener.outgoing.SOAPCaller.call(SOAPCaller.java:110))'

...

container-node_0 2015/10/23 13:01:16,092 GMT+0000 MONIT [Petals.Container.Components.petals-bc-soap] : traceCode = 'provideFlowStepBegin', flowInstanceId = '25286110-7986-11e5-94ed-0242ac11000e', flowStepId = '252c7fc1-7986-11e5-a72e-0242ac110012', flowStepInterfaceName = '{http://petals.ow2.org/samples/se-bpmn/notifyVacationService}notifyVacation', flowStepServiceName = '{http://petals.ow2.org/samples/se-bpmn/notifyVacationService}notifyVacationService', flowStepOperationName = '{http://petals.ow2.org/samples/se-bpmn/notifyVacationService}newVacationRequest', flowStepEndpointName = 'edpt-23367c20-7986-11e5-a72e-0242ac110012', flowPreviousStepId = '25286111-7986-11e5-94ed-0242ac11000e'
container-node_0 2015/10/23 13:01:16,277 GMT+0000 MONIT [Petals.Container.Components.petals-bc-soap.edpt-23367c20-7986-11e5-a72e-0242ac110012] : traceCode = 'provideExtFlowStepBegin', flowInstanceId = '25286110-7986-11e5-94ed-0242ac11000e', flowStepId = '2548ba50-7986-11e5-a72e-0242ac110012', flowPreviousStepId = '252c7fc1-7986-11e5-a72e-0242ac110012', requestedURL = 'http://172.17.0.2:80/samples-SOAP-services/services/notifyVacationService'
container-node_0 2015/10/23 13:01:16,554 GMT+0000 MONIT [Petals.Container.Components.petals-bc-soap.edpt-23367c20-7986-11e5-a72e-0242ac110012] : traceCode = 'provideExtFlowStepEnd', flowInstanceId = '25286110-7986-11e5-94ed-0242ac11000e', flowStepId = '2548ba50-7986-11e5-a72e-0242ac110012'
container-node_0 2015/10/23 13:01:16,563 GMT+0000 WARNING [Petals.Container.Components.petals-bc-soap.edpt-23367c20-7986-11e5-a72e-0242ac110012] : Exception on the WS invocation
javax.jbi.messaging.MessagingException: Can't find the Axis service client's pool: this should never happen! Key: org.ow2.petals.binding.soap.listener.outgoing.ServiceClientKey@5641699b
        at org.ow2.petals.binding.soap.SoapComponentContext.returnServiceClient(SoapComponentContext.java:528)
        at org.ow2.petals.binding.soap.listener.outgoing.SOAPCaller.call(SOAPCaller.java:216)
        at org.ow2.petals.binding.soap.listener.outgoing.JBIListener.onJBIMessage(JBIListener.java:59)
        at org.ow2.petals.component.framework.process.MessageExchangeProcessor.invokeJBIListener(MessageExchangeProcessor.java:469)
        at org.ow2.petals.component.framework.process.MessageExchangeProcessor.processAsProvider(MessageExchangeProcessor.java:408)
        at org.ow2.petals.component.framework.process.MessageExchangeProcessor.process(MessageExchangeProcessor.java:275)
        at org.ow2.petals.component.framework.process.MessageExchangeProcessor.run(MessageExchangeProcessor.java:199)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
container-node_0 2015/10/23 13:01:16,567 GMT+0000 MONIT [Petals.Container.Components.petals-bc-soap] : traceCode = 'provideFlowStepFailure', flowInstanceId = '25286110-7986-11e5-94ed-0242ac11000e', flowStepId = '252c7fc1-7986-11e5-a72e-0242ac110012', failureMessage = 'Can't find the Axis service client's pool: this should never happen! Key: org.ow2.petals.binding.soap.listener.outgoing.ServiceClientKey@5641699b'
Christophe DENEUX added a comment - Fri, 23 Oct 2015 - 14:30:27 +0200 - edited

In the log of the replicated container, we can see that several transporter NIO connections are established just after the SU deployment:

container-node_0 2015/10/28 11:12:18,881 GMT+0000 INFO [Petals.Container.Components.petals-bc-soap] : Deploy Service Unit 'su-SOAP-notifyVacationService-provide'
container-node_0 2015/10/28 11:12:24,951 GMT+0000 INFO [Petals.Container.Components.petals-bc-soap] : New Service Endpoint deployed : {http://petals.ow2.org/samples/se-bpmn/notifyVacationService}notifyVacationService ->edpt-c22ed980-7d64-11e5-bdb0-0242ac11001f (INTERNAL):roboconf-demo-1/container-node_0/petals-bc-soap
container-node_0 2015/10/28 11:12:24,982 GMT+0000 INFO [Petals.Container.Components.petals-bc-soap] : Service Unit 'su-SOAP-notifyVacationService-provide' deployed
container-node_0 2015/10/28 11:12:24,989 GMT+0000 INFO [Petals.Transporter.NioTransportProtocol.NioServerAgent] : A connection is accepted: Socket[addr=/172.17.0.15,port=42063,localport=7800]
container-node_0 2015/10/28 11:12:25,000 GMT+0000 INFO [Petals.Transporter.NioTransportProtocol.NioServerAgent] : A connection is accepted: Socket[addr=/172.17.0.15,port=42064,localport=7800]
container-node_0 2015/10/28 11:12:25,001 GMT+0000 INFO [Petals.Transporter.NioTransportProtocol.NioServerAgent] : A connection is accepted: Socket[addr=/172.17.0.15,port=42065,localport=7800]
container-node_0 2015/10/28 11:12:25,005 GMT+0000 INFO [Petals.Transporter.NioTransportProtocol.NioServerAgent] : A connection is accepted: Socket[addr=/172.17.0.15,port=42067,localport=7800]
container-node_0 2015/10/28 11:12:25,008 GMT+0000 INFO [Petals.Transporter.NioTransportProtocol.NioServerAgent] : A connection is accepted: Socket[addr=/172.17.0.15,port=42068,localport=7800]
container-node_0 2015/10/28 11:12:25,062 GMT+0000 INFO [Petals.JBI-Management.DeploymentService] : Service Assembly 'sa-su-SOAP-notifyVacationService-provide' deployed
container-node_0 2015/10/28 11:12:25,232 GMT+0000 INFO [Petals.Container.Components.petals-bc-soap] : Placeholders reloading.
container-node_0 2015/10/28 11:12:25,233 GMT+0000 INFO [Petals.Container.Components.petals-bc-soap] : Placeholders reloaded.
container-node_0 2015/10/28 11:12:25,255 GMT+0000 INFO [Petals.Container.Components.petals-bc-soap] : Init Service Unit 'su-SOAP-notifyVacationService-provide'
container-node_0 2015/10/28 11:12:25,255 GMT+0000 INFO [Petals.Container.Components.petals-bc-soap] : Start Service Unit 'su-SOAP-notifyVacationService-provide'

and it seems that no message exchange processor is available anymore after a short while:

container-node_0 2015/10/28 11:12:26,152 GMT+0000 INFO [Petals.JBI-Management.DeploymentService] : Service Assembly 'sa-su-SOAP-archiveService-provide' started
container-node_0 2015/10/28 11:13:25,088 GMT+0000 WARNING [Petals.Container.Components.petals-bc-soap] : Try #0. No JBI message exchange processor is available in the pool. Wait 254ms before next try.
java.util.NoSuchElementException: Pool exhausted
        at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1110)
        at org.ow2.petals.component.framework.process.JBIProcessorManager.process(JBIProcessorManager.java:415)
        at org.ow2.petals.component.framework.process.MessageExchangeAcceptor.run(MessageExchangeAcceptor.java:134)
container-node_0 2015/10/28 11:13:25,345 GMT+0000 WARNING [Petals.Container.Components.petals-bc-soap] : Try #1. No JBI message exchange processor is available in the pool. Wait 509ms before next try.
java.util.NoSuchElementException: Pool exhausted
        at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1110)
        at org.ow2.petals.component.framework.process.JBIProcessorManager.process(JBIProcessorManager.java:415)
        at org.ow2.petals.component.framework.process.MessageExchangeAcceptor.run(MessageExchangeAcceptor.java:134)
...

These errors should be solved by PETALSCDK-123.
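The "Pool exhausted" retries above can be sketched as follows. This is a minimal illustration only: the real CDK relies on Apache Commons Pool's GenericObjectPool, and the class and method names here (ProcessorPoolSketch, borrow, backoffMs) are hypothetical, mimicking the observed behaviour with a bounded queue.

```java
import java.util.NoSuchElementException;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch of the acceptor's borrow/retry behaviour seen in the log.
public class ProcessorPoolSketch {
    private final BlockingQueue<Runnable> pool;

    public ProcessorPoolSketch(final int size) {
        this.pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            // Each pooled object stands in for one JBI message exchange processor.
            this.pool.offer(() -> { /* process one JBI exchange */ });
        }
    }

    /** Borrow a processor, failing immediately when the pool is exhausted. */
    public Runnable borrow() {
        final Runnable p = this.pool.poll();
        if (p == null) {
            // Same failure mode as GenericObjectPool.borrowObject() when no
            // instance is available: java.util.NoSuchElementException.
            throw new NoSuchElementException("Pool exhausted");
        }
        return p;
    }

    /** Return a processor to the pool once its exchange is processed. */
    public void release(final Runnable p) {
        this.pool.offer(p);
    }

    /**
     * Exponential back-off approximating the waits logged
     * (254 ms on try #0, then ~509 ms on try #1).
     */
    public static long backoffMs(final int attempt, final long baseMs) {
        return baseMs << attempt; // baseMs * 2^attempt
    }
}
```

With a pool of size 1, a second borrow() before the first release() throws NoSuchElementException, which is the situation the acceptor logs before waiting and retrying.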


People

Dates

  • Created:
    Fri, 23 Oct 2015 - 14:28:55 +0200
    Updated:
    Mon, 17 Apr 2023 - 12:27:19 +0200