Subject: RE: The submarine task you guys did
Date: Mon, 15 Nov 1999 12:26:18 -0800 (PST)

> Date: Nov 15
> Subject: "RE: The submarine task you guys did"
>
> In addition to the paper ____ mentioned, a paper was generated for a
> submarine technology symposium out here in 1998. It has a high-level
> overview of the layered planning architecture and an interesting
> discussion of the application domain.
>
> ____: Please keep me in the loop if you do anything with this. There is
> always the potential for some outside sponsorship. I'm also available
> for further discussion, or to answer questions if necessary.
>
> ___
>
> <>

The Generative Layer, as described in Section 2.4.2 of STS 98-Final.doc, is closest to the work I'm interested in doing. I'm particularly interested in the modeling work represented in Figure 6 (TPA Path Planner MMI). How is uncertainty about ship and sub positions represented? How do such uncertainties evolve over time? How is information gained to reduce uncertainty, and how is it incorporated into the representation? What are the typical tasks for the domain?

I would very much like to be in contact with someone close to this aspect of the work, and would certainly keep you in the loop if I'm able to use this problem in my research. Thanks very much for the paper.

Subject: RE: JHU/APL POC
Date: Fri, 10 Dec 1999 13:34:05 -0800 (PST)

In particular, I'm interested in applying various search techniques to the simple problem scenario used in Figure 6 ("TPA Path Planner MMI") of:

  Smith, Jacobus, and Watson. "Preparing to do Tomorrow's Job Today -
  Automated Tactical and Mission Planning Assistance for the Information
  Age Submarine"

I'd like details of the generative layer's problem description and best-first search algorithm. How is the continuous problem discretized so that discrete search can be applied? What are the state space, operators, and evaluation function used to evaluate a sequence of operations in the state space?
How does the state space evolve autonomously? (I.e., what is the simple model used for enemy ship/sub movement? How is position uncertainty updated?) The simpler the problem description, the better. I'd be happy to pass along my results, but they won't be useful unless they are applied to a comparable problem. I could make guesses as to what's in Figure 6, but I'm afraid I'd end up comparing apples to oranges.

Thanks,

Subject: Re: FW: JHU/APL POC
Date: Fri, 10 Dec 1999 16:23:53 -0800

Recursive Best-First Search (RBFS) was published by Rich Korf. I think this is the paper reference, which I pulled from his web page:

  Linear-space best-first search, Artificial Intelligence, Vol. 62, No. 1,
  July 1993, pp. 41-78.

His web page is: http://www.cs.ucla.edu/~korf/publications.html

You could e-mail him directly to get an electronic copy; he's always very good about sending them out, as he says on his web page: "If you're interested in any of these papers, please send email to korf@cs.ucla.edu, and I'll be happy to send you copies."

As far as the discretization goes, off the top of my head, I think the way we did it was to discretize velocity - I believe we had a stop, slow, and full speed setting for each vessel - and direction as well. The actual numerical speed used for each level varied depending on the particular platform. The directional discretization level was an input parameter: you could slice the 360 degrees into N different directions. That way, you had some control over the branching factor in the search, and thus could trade off the speed of the algorithm against the granularity of the solution.

The evaluation function had to do with how far ownship was from the next goal, combined with an estimate of detection penalty. We had a fairly crude but interesting model of detection probability - each enemy had an outer radius, beyond which the probability was zero.
There was also an inner radius on each enemy, and if ownship was within that distance, the probability was considered to be 1 (which then caused a heavy penalty in the evaluation function). Between the two radii, the probability of detection had a non-linear slope, increasing more and more the closer ownship came to the actual position of the enemy vessel. We used to call it a donut model with a "smooshy" region in between the radii. :)

Enemy movement worked in two ways. First, each enemy agent could perform scripted maneuvers: e.g., a ladder search, back and forth, a patrol square, etc. We could combine behaviors as a function of time, so an enemy could zigzag for a little while, then start a ladder search, then go straight, etc. But it wasn't reactive.

The second and much more interesting kind of enemy agents were controlled using the JAM procedural reasoning system. With JAM, we could encode behaviors like before, but we could also make the enemies proactive and reactive. For example, if ownship strayed too close to an enemy vessel and the enemy detected it, the enemy could launch a torpedo and then execute a fleeing or pursuit maneuver. The generative layer did not plan out this sort of occurrence, of course. It made its plans pretty much assuming the enemies would not detect ownship and would continue their observed behaviors. It was the middle and reactive layers' job to plan on the fly, especially for weapon-in-the-water scenarios.

I do not think we did any modeling of position uncertainty, other than the fact that the donut model itself provides it in a crude sort of way.

Well, I think I've addressed your points. Feel free to ask any other questions.
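To make the discretization above concrete, here is a minimal sketch of successor generation under that scheme (three speed settings, N discrete headings). The function name, the use of a flat (x, y, t) state, and the fixed time step `dt` are my own illustrative assumptions, not details from the original system.

```python
import math

def successors(x, y, t, full_speed, n_headings, dt=1.0):
    """Yield the (x, y, t) states reachable in one time step dt.

    Speed is discretized as stop / slow (half of full) / full, and
    heading as n_headings even slices of 360 degrees, so the branching
    factor is 2 * n_headings + 1 (stop contributes a single state).
    """
    states = [(x, y, t + dt)]  # STOP: position unchanged, time advances
    for speed in (0.5 * full_speed, 1.0 * full_speed):  # slow, full
        for k in range(n_headings):
            theta = 2.0 * math.pi * k / n_headings
            states.append((x + speed * dt * math.cos(theta),
                           y + speed * dt * math.sin(theta),
                           t + dt))
    return states
```

With N = 8 headings this gives 17 successors per state, which shows directly how N trades search speed against solution granularity.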
Subject: RE: FW: JHU/APL POC
Date: Tue, 14 Dec 1999 14:51:26 -0500

I've looked up the values, and ____, please correct me if I'm mistaken. Input values to the search appear to be:

  8 discrete courses
  heuristic weight h_w = 1.75

The cost of a state s (position and time) is given by

  f(s) = h_w * h(s) + g(s)

where

  g(s) = cost of current state
       = time to state (t) + a scaled intersect value in [0.0, 1.0]
         based on location with respect to the obstacles (contacts)
         in the system.

Each obstacle has an inner and outer radius: if a state lies outside the outer radius, the intersect value is 0.0; if inside the inner radius, it is a very large value. A non-linear Gaussian function is used to interpolate the intersect value for the obstacle in between, based on an avoidance value "a" (which can be unique for every contact), the distance to the inner radius "x", and half the distance between the two radii "sig". This value is then scaled by the current time (t). The cost function g(s) is as follows (for a state with position p, at time t):

  1) p inside inner radius:   VERY LARGE NUMBER
  2) p outside outer radius:  t + 0.0
  3) otherwise:               t + a * t * exp(-x^2 * log(2) / sig^2)

The heuristic is a measure of the time to the goal (which may possibly be moving), based on current location, time, and a maximum speed. The magnitudes of the velocities are based on an input parameter "update":

  STOP: 0
  HALF: 0.5 * update
  FULL: 1.0 * update

Hope this helps,
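As a sanity check, the cases above transcribe directly into code. The helper names and argument layout are my own illustrative choices; the constants and formulas follow the email (h_w = 1.75, the three-case g(s), and the Gaussian interpolation over the annulus between the radii).

```python
import math

H_W = 1.75            # heuristic weight given in the email
VERY_LARGE = 1.0e9    # stands in for "VERY LARGE NUMBER"

def intersect_penalty(dist, r_inner, r_outer, a, t):
    """Scaled intersect value for one contact at distance `dist` from ownship."""
    if dist <= r_inner:                   # case 1: inside the inner radius
        return VERY_LARGE
    if dist >= r_outer:                   # case 2: outside the outer radius
        return 0.0
    x = dist - r_inner                    # distance to the inner radius
    sig = 0.5 * (r_outer - r_inner)       # half the distance between the radii
    # case 3: Gaussian interpolation, scaled by avoidance a and current time t
    return a * t * math.exp(-x * x * math.log(2) / (sig * sig))

def g(t, dist, r_inner, r_outer, a):
    """g(s): time to the state plus the scaled intersect value."""
    return t + intersect_penalty(dist, r_inner, r_outer, a, t)

def h(dist_to_goal, max_speed):
    """h(s): estimated time to the (possibly moving) goal at max speed."""
    return dist_to_goal / max_speed

def f(t, dist, r_inner, r_outer, a, dist_to_goal, max_speed):
    """f(s) = h_w * h(s) + g(s)."""
    return H_W * h(dist_to_goal, max_speed) + g(t, dist, r_inner, r_outer, a)
```

One property worth noting: the log(2) factor makes the penalty exactly a * t / 2 at the midpoint of the annulus (x = sig), and it falls to a * t / 16 just inside the outer radius before dropping discontinuously to zero - consistent with the "fairly crude" characterization earlier in the thread.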