Petals ESB Container

"No endpoint(s) matching the target service" when attaching a container

Details

  • Type: Bug
  • Status: Resolved
  • Priority: Major
  • Resolution: Fixed
  • Affects Version/s: 5.0.0
  • Fix Version/s: 5.0.1
  • Component/s: Topology/network
  • Security Level: Public
  • Description:

    From a container, I invoke a service located on the same container. The service invocation succeeds.

    When I attach this container to another topology concurrently with service invocations, I get the following error, which should not occur since it is a local invocation:

    RMIComponentContext_CLI_moving_new_container_to_an_existing_topology_initial-container-0, iteration #8, [consumer]: Test to invoke a local service on the container 'initial-container-0'
    RMIComponentContext_CLI_moving_new_container_to_an_existing_topology_initial-container-0, iteration #8: Running test 'Invoke a local service on the container 'initial-container-0'' on operation '{http://petals.ow2.org/}hello' with mep 'IN_OUT'
    javax.jbi.messaging.MessagingException: org.ow2.petals.microkernel.api.jbi.messaging.RoutingException: No endpoint(s) matching the target service '{http://petals.ow2.org/}HelloService' for Message Exchange with id 'petals:uid:2b23a290-929b-11e5-a27f-0090f5fbc4a1'
    	at org.ow2.petals.microkernel.jbi.messaging.exchange.DeliveryChannelImpl.sendExchange(DeliveryChannelImpl.java:406)
    	at org.ow2.petals.microkernel.jbi.messaging.exchange.DeliveryChannelImpl.send(DeliveryChannelImpl.java:172)
    	at org.objectweb.petals.tools.rmi.server.remote.implementations.RemoteDeliveryChannelImpl.send(RemoteDeliveryChannelImpl.java:328)
    	at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:606)
    	at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
    	at sun.rmi.transport.Transport$2.run(Transport.java:202)
    	at sun.rmi.transport.Transport$2.run(Transport.java:199)
    	at java.security.AccessController.doPrivileged(Native Method)
    	at sun.rmi.transport.Transport.serviceCall(Transport.java:198)
    	at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:567)
    	at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:828)
    	at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.access$400(TCPTransport.java:619)
    	at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$1.run(TCPTransport.java:684)
    	at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$1.run(TCPTransport.java:681)
    	at java.security.AccessController.doPrivileged(Native Method)
    	at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:681)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    	at java.lang.Thread.run(Thread.java:745)
    	at sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:275)
    	at sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:252)
    	at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:161)
    	at java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(RemoteObjectInvocationHandler.java:227)
    	at java.rmi.server.RemoteObjectInvocationHandler.invoke(RemoteObjectInvocationHandler.java:179)
    	at com.sun.proxy.$Proxy15.send(Unknown Source)
    	at com.ebmwebsoucing.integration.client.rmi.RMIClient.runConsumerIntegration(RMIClient.java:425)
    	at com.ebmwebsoucing.integration.client.rmi.RMIClient.main(RMIClient.java:259)
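
    For context, the consumer-side invocation follows the standard JBI messaging pattern, roughly as in this minimal sketch (the payload is illustrative; the service and operation QNames are taken from the log above):

    {code}
// Minimal sketch of the failing consumer-side invocation using the standard
// javax.jbi.messaging API. The payload is illustrative; the service and
// operation QNames come from the log above.
import java.io.StringReader;
import javax.jbi.messaging.DeliveryChannel;
import javax.jbi.messaging.InOut;
import javax.jbi.messaging.MessageExchangeFactory;
import javax.jbi.messaging.NormalizedMessage;
import javax.xml.namespace.QName;
import javax.xml.transform.stream.StreamSource;

public class LocalHelloInvocation {
    public static void invoke(final DeliveryChannel channel) throws Exception {
        final MessageExchangeFactory factory = channel.createExchangeFactory();
        final InOut exchange = factory.createInOutExchange();
        exchange.setService(new QName("http://petals.ow2.org/", "HelloService"));
        exchange.setOperation(new QName("http://petals.ow2.org/", "hello"));
        final NormalizedMessage in = exchange.createMessage();
        in.setContent(new StreamSource(new StringReader("<hello/>")));
        exchange.setInMessage(in);
        // During container attachment, this send() fails with:
        // RoutingException: No endpoint(s) matching the target service ...
        channel.send(exchange);
    }
}
    {code}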
  • Environment:
    -

Issue Links

  • This issue blocks PETALSESBCONT-358
  • This issue depends on PETALSSERMI-19
  • This issue depends on PETALSESBCONT-368

Activity

Christophe DENEUX added a comment - Tue, 24 Nov 2015 - 12:19:08 +0100 - edited

When stopping the Fractal components associated with the JBI artefacts, the JBI artefacts are stopped without persisting the state STOPPED, but their endpoints are deactivated.
As these Fractal components are stopped before the router, there is a short window during which we can get the error mentioned in the issue summary.

It makes no sense to stop JBI artefacts and their associated Fractal components:

  • the external listeners of JBI components must always accept incoming requests during container attachment/detachment,
  • only incoming message exchanges at the NMR level must be blocked during the attachment/detachment process, for example at the router entry point,
  • to migrate registry data from one registry to the other, it is sufficient to perform the migration while the router is stopped. That's why PETALSESBCONT-358 has been reopened.

So, the right process to block service invocations is something like this (see the sketch after this list):

  1. pause the router:
    1. wait for the end of pending exchanges. We wait for pending exchanges to finish within a given time; when this time expires, the process continues and the remaining pending exchanges will probably fail later,
    2. interrupt all delivery channels waiting for message exchanges. Delivery channels will retry waiting for new message exchanges until the router is resumed,
    3. caution: the sending of synchronous invocations is blocked at the transporter level. It would be better to move that to the router level (PETALSESBCONT-368),
  2. stop the Fractal component 'Router'. All incoming calls to the router methods will be blocked at the Fractal framework level,
  3. migrate registry data,
  4. start the Fractal component 'Router',
  5. resume the router.
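
A minimal Java sketch of this pause/migrate/resume sequence follows. All interfaces here (Router, LifecycleController, RegistryMigrator) are hypothetical stand-ins for illustration, not the actual Petals microkernel API; Fractal's real lifecycle controller does expose startFc()/stopFc().

{code}
// Hypothetical sketch of the attachment procedure described above.
// Router, LifecycleController and RegistryMigrator are illustrative
// stand-ins, not the actual Petals microkernel interfaces.
import java.util.concurrent.TimeUnit;

interface Router {
    void pause();                            // park new exchanges at the NMR entry point
    boolean awaitPendingExchanges(long timeout, TimeUnit unit) throws InterruptedException;
    void interruptWaitingDeliveryChannels(); // channels retry until resume()
    void resume();
}

interface LifecycleController {              // stands in for Fractal's LifeCycleController
    void stopFc() throws Exception;
    void startFc() throws Exception;
}

interface RegistryMigrator {
    void migrate() throws Exception;         // copy endpoint data to the new topology registry
}

final class TopologyAttachment {
    void attach(final Router router, final LifecycleController routerComponent,
                final RegistryMigrator migrator, final long pauseTimeoutMs) throws Exception {
        // 1. Pause the router and drain it.
        router.pause();
        // 1.1 Bounded wait: exchanges still pending after the timeout will probably fail later.
        if (!router.awaitPendingExchanges(pauseTimeoutMs, TimeUnit.MILLISECONDS)) {
            // timeout expired: continue anyway, as described in the comment above
        }
        // 1.2 Unblock delivery channels waiting for exchanges; they retry until resume().
        router.interruptWaitingDeliveryChannels();
        // 2. Stop the Fractal component 'Router': further calls to its methods
        //    are blocked at the Fractal framework level.
        routerComponent.stopFc();
        // 3. Migrate registry data while no routing can occur.
        migrator.migrate();
        // 4. Restart the Fractal component, then 5. resume the router.
        routerComponent.startFc();
        router.resume();
    }
}
{code}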
Christophe DENEUX made changes - Tue, 24 Nov 2015 - 12:19:08 +0100
Status: New [ 10000 ] → Open [ 10002 ]
Priority: (none) → Major [ 3 ]
Christophe DENEUX made changes - Tue, 24 Nov 2015 - 12:19:14 +0100
Status: Open [ 10002 ] → In Progress [ 10003 ]
Christophe DENEUX made changes - Tue, 24 Nov 2015 - 12:19:26 +0100
Fix Version/s: (none) → 5.0.1 [ 10579 ]
Christophe DENEUX made changes - Tue, 24 Nov 2015 - 12:23:34 +0100
Link: This issue blocks PETALSESBCONT-358
Christophe DENEUX added a comment - Thu, 26 Nov 2015 - 16:49:23 +0100 - edited

Several commits were made for this issue:

  1. svn#38745, which applies the algorithm explained in the previous comment and improves the integration test with this use case,
  2. svn#38749, which improves svn#38745,
  3. svn#38750, which applies the workaround of PETALSSERMI-19,
  4. svn#38755, which introduces a few fixes and new log traces in the integration test,
  5. svn#38756, which introduces a configuration parameter defining the delay to wait before forcing the end of exchanges when pausing the router (see the sketch below).

PETALSSERMI-19 was detected during debugging; that's why its workaround was applied in svn#38750. Once it is fixed, the workaround can be removed.
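
Such a bounded wait could look like the following minimal sketch. The property name 'petals.router.pause-timeout' and the PendingExchangeBarrier class are assumptions for illustration, not the actual Petals configuration key or implementation.

{code}
// Minimal sketch of a configurable bounded wait when pausing the router.
// The property name "petals.router.pause-timeout" and this class are
// hypothetical, not the actual Petals configuration or code.
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

final class PendingExchangeBarrier {
    private final Set<String> pendingExchanges = ConcurrentHashMap.newKeySet();

    void exchangeStarted(final String exchangeId) {
        pendingExchanges.add(exchangeId);
    }

    void exchangeEnded(final String exchangeId) {
        pendingExchanges.remove(exchangeId);
        synchronized (this) {
            notifyAll(); // wake the pausing thread to re-check emptiness
        }
    }

    /** Returns true if all exchanges ended before the configured delay expired. */
    synchronized boolean awaitQuiescence() throws InterruptedException {
        final long timeoutMs = Long.getLong("petals.router.pause-timeout", 30_000L);
        final long deadline = System.nanoTime() + timeoutMs * 1_000_000L;
        while (!pendingExchanges.isEmpty()) {
            final long remainingMs = (deadline - System.nanoTime()) / 1_000_000L;
            if (remainingMs <= 0) {
                return false; // timeout: remaining exchanges will probably fail later
            }
            wait(remainingMs);
        }
        return true;
    }
}
{code}

Using a monitor with a deadline rather than a fixed sleep lets the router resume as soon as the last pending exchange ends.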

Christophe DENEUX made changes - Thu, 26 Nov 2015 - 16:49:39 +0100
Link: This issue depends on PETALSSERMI-19
Christophe DENEUX made changes - Fri, 27 Nov 2015 - 10:13:15 +0100
Link: This issue depends on PETALSESBCONT-368
Christophe DENEUX made changes - Tue, 1 Dec 2015 - 11:23:16 +0100
Description: updated to specify that the container is attached *concurrently with service invocations* (the stack trace and the rest of the description are unchanged; see the Description above).
Victor NOËL added a comment - Tue, 1 Dec 2015 - 13:01:16 +0100

Can we consider that fixed now that the pausing works as desired?

Christophe DENEUX added a comment - Tue, 1 Dec 2015 - 14:26:07 +0100

Yes, sure.

Christophe DENEUX added a comment - Tue, 1 Dec 2015 - 14:26:27 +0100

Fixed in trunk

Christophe DENEUX made changes - Tue, 1 Dec 2015 - 14:26:27 +0100
Status: In Progress [ 10003 ] → Resolved [ 10004 ]
Resolution: (none) → Fixed [ 1 ]
Transition               Status Change Time   Execution Times   Last Executer       Last Execution Date
New → Open               10m 19s              1                 Christophe DENEUX   Tue, 24 Nov 2015 - 12:19:08 +0100
Open → In Progress       6s                   1                 Christophe DENEUX   Tue, 24 Nov 2015 - 12:19:14 +0100
In Progress → Resolved   7d 2h 7m             1                 Christophe DENEUX   Tue, 1 Dec 2015 - 14:26:27 +0100




Dates

  • Created: Tue, 24 Nov 2015 - 12:08:49 +0100
  • Updated: Tue, 1 Mar 2016 - 10:23:28 +0100
  • Resolved: Tue, 1 Dec 2015 - 14:26:26 +0100