|<--next stage||^--Soigan--^||even more-->|
Things are looking pretty good so far. I've implemented service.run(), service.add(), service.hosts(), service.services(), service.params() and service.register(), and they all work well. The service.register() function takes advantage of the new register system I put together, similar to the Listener interface that some APIs use for event-handling. In this case, anyone interested in incoming Responses can register themselves to be told. A logging class (to console) has been set up on the Server to watch the data come through for testing (it'll be changed to a file logger or syslog logger later). The one-time interest from service.run() is also set up here. Another class calls a Client with Results that it has registered for.
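The register system can be sketched roughly like this; the interface and class names here are my shorthand for illustration, not necessarily what the actual code uses:

```java
import java.util.Vector;

// Hypothetical names -- the real register system's types may differ.
interface ResponseListener {
    void responseReceived(String service, String host, String value);
}

class ResponseRegistry {
    private final Vector<ResponseListener> listeners = new Vector<ResponseListener>();

    // Anyone interested in incoming Responses registers themselves here,
    // much like the Listener interfaces some event-handling APIs use.
    public void register(ResponseListener l) { listeners.add(l); }
    public void unregister(ResponseListener l) { listeners.remove(l); }

    // Called by the Server whenever a Response arrives; every registered
    // listener (console logger, one-time service.run() caller, etc.) is told.
    public void dispatch(String service, String host, String value) {
        for (ResponseListener l : listeners) {
            l.responseReceived(service, host, value);
        }
    }
}
```

A console logger is then just one more listener that prints what it hears.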
What's remaining on the XML-RPC side are service.query() on the Server side, and service.result() on the Client side. The latter is easy enough -- it's nearly the same code as plugin.response in the Server -- but service.query() depends on the Server storing the Responses, and I haven't decided the best way to do that. Some language-dependent structure? A MySQL database? Hmn. I think I'll go get the Client-side XML-RPC written first. I also realized that Clients might need to get at a Service's schema, so I should add that too. That would allow a Client to connect to a Server without any idea of what's available, and through a handful of calls determine what info is out there, what hosts can provide it, and how to modify those requests.
Another thing that got changed was the return value of service.run(), which originally returned a struct. It did so because I close-mindedly thought that all it would ever call is specific Services, such as who@host or last@host. This is nice and all, but one of the main reasons for this project was to write a replacement for our showproc program, which needs to evaluate the wildcard Service ps@*. To do so, it needs to be able to retrieve multiple Results, which requires an array, not a struct. In the case that a non-multicast Service is used, the array will just have the one entry.
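In Apache XML-RPC terms, a java.util.Vector goes over the wire as an XML-RPC array and a java.util.Hashtable as a struct, so the change amounts to wrapping the per-Result structs in a Vector. A rough sketch, with a class and method name of my own invention:

```java
import java.util.Hashtable;
import java.util.Vector;

// Sketch only: the real service.run() handler in Soigan may differ.
// Apache XML-RPC maps java.util.Vector to an XML-RPC <array> and
// java.util.Hashtable to a <struct>, so a Vector of Hashtables is
// serialized as an array of structs -- one struct per Result.
public class RunResults {
    public static Vector toResultArray(Hashtable[] results) {
        Vector array = new Vector();
        for (int i = 0; i < results.length; i++) {
            array.add(results[i]); // one struct per responding Worker
        }
        return array; // a non-multicast Service yields a one-entry array
    }
}
```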
Otherwise, things are looking pretty good. The changeover to using a Service class went pretty well, and most of the functions seem to be working nicely. I mentioned above that service.query() depended on the Server keeping Results around. The first implementation, instead of using a database, uses a circular buffer. I didn't find one in the standard Java classes, so I rolled my own, and it works well enough. The size is settable (though currently hard-coded -- I should move that to the configuration file), and service.query() works as expected. Well, almost; I haven't implemented any of the controls for that function (number and age of Results), but it currently retrieves all Results in the circular buffer that match the Service provided.
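For the curious, a minimal circular buffer along these lines could look like the following. This is a sketch of the idea, not the actual class; once the buffer fills, new entries overwrite the oldest ones:

```java
// A fixed-size circular buffer: add() overwrites the oldest entry once
// full, snapshot() returns the surviving entries oldest-first.
public class CircularBuffer {
    private final Object[] slots;
    private int next = 0;  // index of the next slot to write
    private int count = 0; // number of valid entries (<= slots.length)

    public CircularBuffer(int size) {
        slots = new Object[size];
    }

    public synchronized void add(Object o) {
        slots[next] = o;
        next = (next + 1) % slots.length;
        if (count < slots.length) count++;
    }

    // service.query() would walk this snapshot and keep the entries
    // matching the requested Service.
    public synchronized Object[] snapshot() {
        Object[] out = new Object[count];
        int start = (count < slots.length) ? 0 : next;
        for (int i = 0; i < count; i++) {
            out[i] = slots[(start + i) % slots.length];
        }
        return out;
    }
}
```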
Oh yeah, did I mention I added in the multicasting support? After a bit of head-scratching while configuring the switches, it's working very well.
For the most part, this functionality could still be used in the case of wildcard Services -- the Query is still between one Client and one Server, and each single Worker would send its Response to a single Server. But what about the actual Request? Yes, we could support wildcards -- provided we knew our list of Workers -- by having the Server make a separate Request to each Worker. That requires the aforementioned knowledge (of all of our Workers), which we may or may not have. It also creates a lot of traffic, depending on how many Workers are out there. This is why we talked about using multicast to begin with.
I did some thinking about how to take these multicast (and therefore, UDP) packets and process them into my existing XML-RPC system. I thought about sending little "text functions" out on the network, saying plugin.run("who@*","server",now) or something like that, which some listeners would then turn into XML-RPC calls to the Workers, who would then send Responses as normal. But that seemed so messy -- turning a text representation of the XML-RPC call into a true one. I thought about using the same method as we do between Plugins and Workers, since that too is a representation of an XML-RPC call, or most of it at least. After a little more thinking, I decided that what I would REALLY like is for the same data that's passed over TCP to be passed over UDP. I want the same XML structure to be UDPized, yelled out over the network, and have listeners reTCPize it. This took away the onus of deforming/reforming XML-RPC calls. To do this I wrote a class called MProxy, which does a few things.
First, it listens on a TCP port (5019 by default, changed through the configuration file in the <network> section) for XML-RPC requests. Okay, it's not really an XML-RPC listener, in their terminology; it's just a server socket listening for anything. Whenever it gets a connection on that socket (which would be from a Server making a multicast Request), it bundles all the data sent on it into a multicast packet and sends it off on the multicast address (126.96.36.199 by default, overridden in the configuration file), on the multicast port (5016, just like the Worker default port on TCP). The "uncomfortable" part of MProxy is that it then sends back an XML-RPC response to the caller -- so the Server that connected to MProxy (which is pretending to be an XML-RPC server) gets a valid response. I call this uncomfortable because I'm piping a raw string back to the Server:
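The forwarding half boils down to repackaging whatever bytes came in over TCP into one datagram aimed at the multicast group. A sketch, with class and method names of my own; the group address in the example is a placeholder, not Soigan's default:

```java
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.UnknownHostException;

// Sketch of MProxy's forwarding half (names are assumptions): the raw
// XML-RPC request bytes, HTTP header and all, are wrapped verbatim into
// a single multicast datagram.
public class McastForwarder {
    private final InetAddress group;
    private final int port;

    public McastForwarder(String groupAddr, int port) throws UnknownHostException {
        this.group = InetAddress.getByName(groupAddr);
        this.port = port;
    }

    // The caller would send the returned packet on a MulticastSocket.
    public DatagramPacket wrap(byte[] requestBytes) {
        return new DatagramPacket(requestBytes, requestBytes.length, group, port);
    }
}
```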
    HTTP/1.1 200 OK
    Server: Apache XML-RPC 1.0
    Connection: close
    Content-Type: text/xml
    Content-Length: 143

    <?xml version="1.0" encoding="ISO-8859-1"?>
    <methodResponse>
      <params>
        <param><value>
          <boolean>1</boolean>
        </value></param>
      </params>
    </methodResponse>

I don't like doing this at all, but it works. Really, I should probably make MProxy a real XML-RPC server that listens for ANY call, returns a true response to all calls, and then sends it off on the multicast. Maybe version 2 will be like that.
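If nothing else, the canned response could at least be assembled so its Content-Length is computed from the body instead of hard-coded, so the two can't drift apart. A sketch (the class name is mine; it assumes single-byte characters, which holds for this ASCII/ISO-8859-1 body):

```java
// Builds the canned "boolean true" XML-RPC response that MProxy pipes
// back to the Server, with Content-Length derived from the actual body.
public class McastAck {
    public static String booleanTrueResponse() {
        String body =
            "<?xml version=\"1.0\" encoding=\"ISO-8859-1\"?>\r\n" +
            "<methodResponse><params><param><value>" +
            "<boolean>1</boolean>" +
            "</value></param></params></methodResponse>";
        return "HTTP/1.1 200 OK\r\n"
             + "Server: Apache XML-RPC 1.0\r\n"
             + "Connection: close\r\n"
             + "Content-Type: text/xml\r\n"
             + "Content-Length: " + body.length() + "\r\n"
             + "\r\n"
             + body;
    }
}
```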
All other instances of MProxy on the network will now hear this UDP version of an XML-RPC call that the Server's MProxy sent out -- HTTP header and everything. This is good, because the MProxy running on each Worker is going to take anything it hears on the multicast channel and stuff it directly to the XML-RPC server that the Worker is running. The result will be a well-formed XML-RPC call that has been passed through the multicast "cloud" and heard correctly by the XML-RPC server at the other end. Or, more likely, by many XML-RPC servers.
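The Worker-side half is then just the reverse: take whatever datagram arrives and stream it, byte for byte, into a TCP connection to the local XML-RPC port. Sketched with assumed names:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.DatagramPacket;
import java.net.Socket;

// Sketch of the Worker-side half of MProxy (names are assumptions):
// replay one multicast datagram as a TCP stream to the Worker's local
// XML-RPC server, so it sees an ordinary XML-RPC request.
public class McastReplayer {
    private final String host;
    private final int port;

    public McastReplayer(String host, int port) {
        this.host = host;
        this.port = port;
    }

    public void replay(DatagramPacket packet) throws IOException {
        Socket s = new Socket(host, port);
        try {
            OutputStream out = s.getOutputStream();
            out.write(packet.getData(), packet.getOffset(), packet.getLength());
            out.flush();
        } finally {
            s.close(); // closing signals end-of-request to the server
        }
    }
}
```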
I have a really pretty diagram on my whiteboard from when I was figuring this all out, but I don't think it'll draw nicely in ASCII, and I don't feel like using Microsoft Paint today. I'll add it later, I'm sure. In essence, though, we're accomplishing two things with MProxy. The first is enabling an anonymous endpoint for our XML-RPC call, by shouting it out over the network with multicast. The second is the translation between TCP and UDP, and back again. Because of these two things, I thought MProxy was a suitable name -- multicast proxy.
And how well does it work? VERY well. I must say it's quite something to finally see a half-dozen Workers responding to a single request in my testbed. So what's left for an encore?