Here’s a situation I came up with some ideas for at work today:
We have a WCF-based service that serves our web-based data to customers in two modes – synchronous and asynchronous. In asynchronous mode, the customer makes a request to the service, the request is queued, and they get a queueId back. They then make a status request with that id to retrieve the results. Internally, we decided that all requests are served through the queuing mechanism. Additionally, we have an app-server load manager that distributes requests across the application servers, which in turn put the work into the queue. The following design issues were discussed today:
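To make the asynchronous flow concrete, here is a minimal sketch of the queue-everything design. This is a toy model, not our actual WCF code – the names (`AsyncRequestService`, `submit_request`, `get_status`) and the in-memory queue are hypothetical stand-ins for the real service operations and queue infrastructure:

```python
import uuid
from enum import Enum

class Status(Enum):
    QUEUED = "Queued"
    COMPLETED = "Completed"

class AsyncRequestService:
    """Toy model of the queue-everything design: every request is
    queued, and the caller polls with the returned queue id."""

    def __init__(self):
        self._queue = []    # pending (queue_id, payload) work items
        self._status = {}   # queue_id -> (Status, result)

    def submit_request(self, payload):
        # Queue the request and immediately hand the customer an id.
        queue_id = str(uuid.uuid4())
        self._queue.append((queue_id, payload))
        self._status[queue_id] = (Status.QUEUED, None)
        return queue_id

    def get_status(self, queue_id):
        # The customer's follow-up status request.
        return self._status.get(queue_id)

    def _process_next(self):
        # Stand-in for an app server's worker loop; here the "work"
        # is just uppercasing the payload.
        queue_id, payload = self._queue.pop(0)
        self._status[queue_id] = (Status.COMPLETED, payload.upper())
```

The point of the model is the contract: `submit_request` never blocks on the actual work, and the customer only ever learns about progress through `get_status`.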
- Does a status request still get queued like any other request? – No, we agreed that it doesn’t make sense to do so.
- Operation status information is currently stored in the database, but it needs to be cached for fast service. This raised an issue: a status request arriving at the load balancer must be routed to the same app server that handled the original request, since that is where the status is cached. The following solutions were considered:
- Add a parameter to the status request identifying which server handled the initial request. – Doesn’t work, since the status service contract is already published and we don’t want to expose these internal details to the customer.
- Have a sticky session that stores the operations centrally – Might work, but it complicates the current design.
- Keep a StatusRepository table on the load manager to look up where the original request was sent – Might work…
- My idea – Poll each application server for the operationID; if it’s not found, move on to the next one. The architects accepted this, because we have at most 100 servers and a lookup on the operationID is cheap.
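The accepted polling approach can be sketched like this. The `lookup` method and `FakeServer` class are hypothetical – in practice this would be an internal call from the load manager to each app server's status cache:

```python
def find_status(app_servers, operation_id):
    """Ask each app server in turn whether it has cached status for
    operation_id; return the first hit, or None if no server knows it.

    `app_servers` is any iterable of objects exposing a hypothetical
    lookup(operation_id) method that returns a status or None."""
    for server in app_servers:
        status = server.lookup(operation_id)
        if status is not None:
            return status
    return None

class FakeServer:
    """Stand-in for an app server's local status cache."""
    def __init__(self, cache):
        self._cache = cache  # operation_id -> status string

    def lookup(self, operation_id):
        return self._cache.get(operation_id)
```

Worst case this makes one cheap lookup call per server (100 with our current fleet), and only for status requests that would otherwise miss the local cache – which is why the architects considered the cost acceptable.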