WO2021007561 - SYSTEM AND METHOD FOR REAL TIME CONTROL OF AN AUTONOMOUS DEVICE

Note: Text based on automatic Optical Character Recognition processes. Please use the PDF version for legal matters

CLAIMS

1. An autonomous delivery vehicle comprising:

a power base including two powered front wheels, two powered back wheels and energy storage, the power base configured to move at a commanded velocity and in a commanded direction to perform a transport of at least one object;

a cargo platform including a plurality of short-range sensors, the cargo platform mechanically attached to the power base;

a cargo container with a volume for receiving the at least one object, the cargo container mounted on top of the cargo platform;

a long-range sensor suite comprising LIDAR and one or more cameras, the long-range sensor suite mounted on top of the cargo container; and

a controller to receive data from the long-range sensor suite and the plurality of short-range sensors, the controller determining the commanded velocity and the commanded direction based at least on the data, the controller providing the commanded velocity and the commanded direction to the power base to complete the transport.

2. The autonomous delivery vehicle of claim 1 wherein the data from the plurality of short-range sensors comprise at least one characteristic of a surface upon which the power base travels.

3. The autonomous delivery vehicle of claim 1 wherein the plurality of short-range sensors comprises at least one stereo camera.

4. The autonomous delivery vehicle of claim 1 wherein the plurality of short-range sensors comprise at least one IR projector, at least one image sensor, and at least one RGB sensor.

5. The autonomous delivery vehicle of claim 1 wherein the plurality of short-range sensors comprises at least one radar sensor.

6. The autonomous delivery vehicle of claim 1 wherein the data from the plurality of short-range sensors comprise RGB-D data.

7. The autonomous delivery vehicle of claim 1 wherein the controller determines a geometry of a road surface based on RGB-D data received from the plurality of short-range sensors.

8. The autonomous delivery vehicle of claim 1 wherein the plurality of short-range sensors detect objects within 4 meters of the autonomous delivery vehicle and the long-range sensor suite detects objects more than 4 meters from the autonomous delivery vehicle.

9. The autonomous delivery vehicle of claim 1 wherein the plurality of short-range sensors comprise a cooling circuit.

10. The autonomous delivery vehicle of claim 1 wherein the plurality of short-range sensors comprise an ultrasonic sensor.

11. The autonomous delivery vehicle of claim 2 wherein the controller comprises:

executable code, the executable code including:

accessing a map, the map formed by a map processor, the map processor comprising:

a first processor accessing point cloud data from the long-range sensor suite, the point cloud data representing the surface;

a filter filtering the point cloud data;

a second processor forming processable parts from the filtered point cloud data;

a third processor merging the processable parts into at least one polygon;

a fourth processor locating and labeling at least one substantially discontinuous surface feature (SDSF) in the at least one polygon, if present, the locating and labeling forming labeled point cloud data;

a fifth processor creating graphing polygons from the labeled point cloud data; and

a sixth processor choosing a path from a starting point to an ending point based at least on the graphing polygons, the autonomous delivery vehicle traversing the at least one SDSF along the path.
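As an illustration only, the map-processing pipeline recited in claims 11 through 15 (filtering the point cloud, segmenting it into processable parts, merging parts into polygons, and labeling SDSFs) can be sketched in Python. Every function body and threshold below is a hypothetical stand-in, not the claimed algorithm:

```python
# Illustrative sketch of the claim 11 pipeline. Every body below is a toy
# stand-in: the claimed filtering, region growing, meshing, and SDSF
# detection are far richer than these placeholders.

def filter_points(points, max_height=2.0):
    """Filter stage (cf. claim 12): drop points above a pre-selected height."""
    return [p for p in points if p[2] <= max_height]

def segment(points, cell=5.0):
    """Second-processor stage (cf. claim 13): bin the cloud into processable parts."""
    parts = {}
    for p in points:
        parts.setdefault(int(p[0] // cell), []).append(p)
    return list(parts.values())

def merge_to_polygons(parts):
    """Third-processor stage (cf. claim 14): one bounding polygon per part."""
    polys = []
    for part in parts:
        xs = [p[0] for p in part]
        ys = [p[1] for p in part]
        polys.append([(min(xs), min(ys)), (max(xs), min(ys)),
                      (max(xs), max(ys)), (min(xs), max(ys))])
    return polys

def label_sdsfs(points, step=0.05):
    """Fourth-processor stage (cf. claim 15): flag height jumps above `step`."""
    pts = sorted(points, key=lambda p: (p[0], p[1]))
    return [b for a, b in zip(pts, pts[1:]) if abs(b[2] - a[2]) > step]

def build_map(points):
    """Run the stages in claim order and return polygons plus labeled SDSFs."""
    filtered = filter_points(points)
    polygons = merge_to_polygons(segment(filtered))
    return polygons, label_sdsfs(filtered)
```

The point of the sketch is the staged data flow, one processor per stage, mirroring the first- through fourth-processor structure of the claim.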

12. The autonomous delivery vehicle as in claim 11 wherein the filter comprises:

a seventh processor executing code including:

conditionally removing points representing transient objects and points representing outliers from the point cloud data; and

replacing the removed points having a pre-selected height.

13. The autonomous delivery vehicle as in claim 11 wherein the second processor includes the executable code comprising:

segmenting the point cloud data into the processable parts; and

removing points of a pre-selected height from the processable parts.

14. The autonomous delivery vehicle as in claim 11 wherein the third processor includes the executable code comprising:

reducing a size of the processable parts by analyzing outliers, voxels, and normals;

growing regions from the reduced-size processable parts;

determining initial drivable surfaces from the grown regions;

segmenting and meshing the initial drivable surfaces;

locating polygons within the segmented and meshed initial drivable surfaces; and

setting at least one drivable surface based at least on the polygons.

15. The autonomous delivery vehicle as in claim 14 wherein the fourth processor includes the executable code comprising:

sorting the point cloud data of the initial drivable surfaces according to a SDSF filter, the SDSF filter including at least three categories of points; and

locating at least one SDSF point based at least on whether the at least three categories of points, in combination, meet at least one first pre-selected criterion.

16. The autonomous delivery vehicle as in claim 15 wherein the fourth processor includes the executable code comprising:

creating at least one SDSF trajectory based at least on whether a plurality of the at least one SDSF point, in combination, meet at least one second pre-selected criterion.

17. The autonomous delivery vehicle as in claim 14 wherein creating graphing polygons includes an eighth processor including the executable code comprising:

creating at least one polygon from the at least one drivable surface, the at least one polygon including exterior edges;

smoothing the exterior edges;

forming a driving margin based on the smoothed exterior edges;

adding the at least one SDSF trajectory to the at least one drivable surface; and

removing interior edges from the at least one drivable surface according to at least one third pre-selected criterion.

18. The autonomous delivery vehicle as in claim 17 wherein the smoothing the exterior edges includes a ninth processor including the executable code comprising:

trimming the exterior edges outward, forming outward edges.

19. The autonomous delivery vehicle as in claim 18 wherein forming the driving margin of the smoothed exterior edges includes a tenth processor including the executable code comprising:

trimming the outward edges inward.

20. The autonomous delivery vehicle as in claim 1 wherein the controller comprises:

a subsystem for navigating at least one substantially discontinuous surface feature (SDSF) encountered by the autonomous delivery vehicle (AV), the AV traveling a path over a surface, the surface including the at least one SDSF, the path including a starting point and an ending point, the subsystem comprising:

a first processor accessing a route topology, the route topology including at least one graphing polygon including filtered point cloud data, the filtered point cloud data including labeled features, the point cloud data including a drivable margin;

a second processor transforming the point cloud data into a global coordinate system;

a third processor determining boundaries of the at least one SDSF, the third processor creating SDSF buffers of a pre-selected size around the boundaries;

a fourth processor determining which of the at least one SDSFs can be traversed based at least on at least one SDSF traversal criterion;

a fifth processor creating an edge/weight graph based at least on the at least one SDSF traversal criterion, the transformed point cloud data, and the route topology; and

a base controller choosing the path from the starting point to the ending point based at least on the edge/weight graph.

21. The autonomous delivery vehicle as in claim 20 wherein the at least one SDSF traversal criterion comprises:

a pre-selected width of the at least one SDSF and a pre-selected smoothness of the at least one SDSF;

a minimum ingress distance and a minimum egress distance between the at least one SDSF and the AV including a drivable surface; and

the minimum ingress distance between the at least one SDSF and the AV accommodating approximately a 90° approach by the AV to the at least one SDSF.
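The traversal criteria of claim 21 amount to a conjunction of geometric checks on each SDSF. A minimal sketch, with every numeric threshold assumed for illustration (the claim names the criteria but specifies no values):

```python
# Hypothetical thresholds; claim 21 names the criteria but gives no values.
MIN_WIDTH = 0.9       # pre-selected SDSF width, meters
MAX_ROUGHNESS = 0.02  # pre-selected smoothness bound
MIN_INGRESS = 0.6     # minimum ingress distance, meters of drivable surface
MIN_EGRESS = 0.6      # minimum egress distance, meters of drivable surface
APPROACH_TOL = 10.0   # tolerance around the ~90 degree approach, degrees

def traversable(width, roughness, ingress, egress, approach_deg):
    """Conjunction of claim 21 style traversal criteria for one SDSF."""
    return (width >= MIN_WIDTH
            and roughness <= MAX_ROUGHNESS
            and ingress >= MIN_INGRESS
            and egress >= MIN_EGRESS
            and abs(approach_deg - 90.0) <= APPROACH_TOL)
```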

22. A method for managing a global occupancy grid for an autonomous device, the global occupancy grid including global occupancy grid cells, the global occupancy grid cells being associated with occupied probability, the method comprising:

receiving sensor data from sensors associated with the autonomous device;

creating a local occupancy grid based at least on the sensor data, the local occupancy grid having local occupancy grid cells;

if the autonomous device has moved from a first area to a second area,

accessing historical data associated with the second area;

creating a static grid based at least on the historical data;

moving the global occupancy grid to maintain the autonomous device in a central position of the global occupancy grid;

updating the moved global occupancy grid based on the static grid;

marking at least one of the global occupancy grid cells as unoccupied, if the at least one of the global occupancy grid cells coincides with a location of the autonomous device;

for each of the local occupancy grid cells,

calculating a position of the local occupancy grid cell on the global occupancy grid;

accessing a first occupied probability from the global occupancy grid cell at the position;

accessing a second occupied probability from the local occupancy grid cell at the position; and

computing a new occupied probability at the position on the global occupancy grid based at least on the first occupied probability and the second occupied probability.
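One common way to combine a first and a second occupied probability, consistent with the combination step of claim 22 and the range check of claims 23 and 24, is a log-odds sum. This sketch assumes that representation, which the claims themselves do not mandate:

```python
import math

def logodds(p):
    """Convert a probability in (0, 1) to log-odds."""
    return math.log(p / (1.0 - p))

def prob(l):
    """Convert log-odds back to a probability."""
    return 1.0 / (1.0 + math.exp(-l))

def merge_cell(global_p, local_p):
    """Fuse a global and a local occupied probability for one cell by
    summing log-odds, then range-check the result to [0, 1] in the
    manner of claims 23 and 24."""
    p = prob(logodds(global_p) + logodds(local_p))
    return min(1.0, max(0.0, p))
```

With a neutral global prior of 0.5, the merged value equals the local observation; repeated agreeing observations drive the probability toward 0 or 1.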

23. The method as in claim 22 further comprising:

range-checking the new occupied probability.

24. The method as in claim 23 wherein the range-checking comprises:

setting the new occupied probability to 0 if the new occupied probability < 0; and

setting the new occupied probability to 1 if the new occupied probability > 1.

25. The method as in claim 22 further comprising:

setting the global occupancy grid cell to the new occupied probability.

26. The method as in claim 23 further comprising:

setting the global occupancy grid cell to the range-checked new occupied probability.

27. A method for creating and managing occupancy grids comprising:

transforming, by a local occupancy grid creation node, sensor measurements to a frame of reference associated with a device;

creating a time-stamped measurement occupancy grid;

publishing the time-stamped measurement occupancy grid as a local occupancy grid;

creating a plurality of local occupancy grids;

creating a static occupancy grid based on surface characteristics in a repository, the surface characteristics associated with a position of the device;

moving a global occupancy grid associated with the position of the device to maintain the device and the local occupancy grid approximately centered with respect to the global occupancy grid;

adding information from the static occupancy grid to the global occupancy grid;

marking an area in the global occupancy grid currently occupied by the device as unoccupied;

for each of at least one cell in each local occupancy grid,

determining a location of the at least one cell in the global occupancy grid;

accessing a first value at the location;

determining a second value at the location based on a relationship between the first value and a cell value at the at least one cell in the local occupancy grid;

comparing the second value against a pre-selected probability range; and

setting the global occupancy grid with the new value if a probability value is within the pre-selected probability range.
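Claims 22 and 27 both move the global occupancy grid so the device stays approximately centered, filling the newly exposed cells from static (historical) data. A one-dimensional toy version of that scroll-and-fill step, with an assumed unknown prior of 0.5 standing in for the static grid:

```python
UNKNOWN = 0.5  # assumed prior for cells with no data

def recenter(grid, shift, static_value=UNKNOWN):
    """Shift a 1-D occupancy grid by `shift` cells so the device stays
    centered; cells scrolled into view are filled from static data
    (here a single assumed value standing in for the historical grid)."""
    n = len(grid)
    out = [static_value] * n
    for i in range(n):
        j = i + shift
        if 0 <= j < n:
            out[i] = grid[j]
    return out
```

A real implementation would index a 2-D grid and look each exposed cell up in the repository of surface characteristics rather than using one constant.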

28. The method as in claim 27 further comprising:

publishing the global occupancy grid.

29. The method as in claim 27 wherein the surface characteristics comprise surface type and surface discontinuities.

30. The method as in claim 27 wherein the relationship comprises summing.

31. A system for creating and managing occupancy grids comprising:

a plurality of local grid creation nodes creating at least one local occupancy grid, the at least one local occupancy grid associated with a position of a device, the at least one local occupancy grid including at least one cell;

a global occupancy grid manager accessing the at least one local occupancy grid, the global occupancy grid manager

creating a static occupancy grid based on surface characteristics in a repository, the surface characteristics associated with the position of the device,

moving a global occupancy grid associated with the position of the device to maintain the device and the at least one local occupancy grid approximately centered with respect to the global occupancy grid;

adding information from the static occupancy grid to at least one global occupancy grid;

marking an area in the global occupancy grid currently occupied by the device as unoccupied;

for each of the at least one cell in each local occupancy grid,

determining a location of the at least one cell in the global occupancy grid;

accessing a first value at the location;

determining a second value at the location based on a relationship between the first value and a cell value at the at least one cell in the local occupancy grid;

comparing the second value against a pre-selected probability range; and

setting the global occupancy grid with the new value if a probability value is within the pre-selected probability range.

32. A method for updating a global occupancy grid comprising:

if an autonomous device has moved to a new position, updating the global occupancy grid with information from a static grid associated with the new position;

analyzing surfaces at the new position;

if the surfaces are drivable, updating the surfaces and updating the global occupancy grid with the updated surfaces; and

updating the global occupancy grid with values from a repository of static values, the static values being associated with the new position.

33. The method as in claim 32 wherein updating the surfaces comprises:

accessing a local occupancy grid associated with the new position;

for each cell in the local occupancy grid,

accessing a local occupancy grid surface classification confidence value and a local occupancy grid surface classification;

if the local occupancy grid surface classification is the same as a global surface classification in the global occupancy grid in the cell, adding a global surface classification confidence value in the global occupancy grid to the local occupancy grid surface classification confidence value to form a sum, and updating the global occupancy grid at the cell with the sum;

if the local occupancy grid surface classification is not the same as the global surface classification in the global occupancy grid in the cell, subtracting the local occupancy grid surface classification confidence value from the global surface classification confidence value in the global occupancy grid to form a difference, and updating the global occupancy grid with the difference;

if the difference is less than zero, updating the global occupancy grid with the local occupancy grid surface classification.
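The per-cell surface-classification update of claim 33 reads as a simple evidence-accumulation rule: agreement adds confidence, disagreement subtracts it, and a negative result switches the stored classification. A sketch under that reading (the confidence assigned after a switch is an assumption, since the claim does not specify it):

```python
def update_surface(global_cls, global_conf, local_cls, local_conf):
    """Claim 33 style per-cell fusion: agreement adds confidence,
    disagreement subtracts it; a negative difference switches the cell
    to the local classification (the confidence kept after the switch
    is an assumption, not stated in the claim)."""
    if local_cls == global_cls:
        return global_cls, global_conf + local_conf
    diff = global_conf - local_conf
    if diff < 0:
        return local_cls, -diff
    return global_cls, diff
```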

34. The method as in claim 32 wherein updating the global occupancy grid with the values from the repository of static values comprises:

for each cell in a local occupancy grid,

accessing, from the local occupancy grid, a local occupancy grid probability that the cell is occupied, expressed as a logodds value;

updating the logodds value in the global occupancy grid with the local occupancy grid logodds value at the cell;

if a pre-selected certainty that the cell is not occupied is met, and if the autonomous device is traveling within lane barriers, and if a local occupancy grid surface classification indicates a drivable surface, decreasing the logodds that the cell is occupied in the local occupancy grid;

if the autonomous device expects to encounter relatively uniform surfaces, and if the local occupancy grid surface classification indicates a relatively non-uniform surface, increasing the logodds in the local occupancy grid; and

if the autonomous device expects to encounter relatively uniform surfaces, and if the local occupancy grid surface classification indicates a relatively uniform surface, decreasing the logodds in the local occupancy grid.
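The conditional logodds adjustments of claim 34 can be sketched as follows; the adjustment magnitude `step` is hypothetical, since the claim states the direction of each adjustment but no amount:

```python
def adjust_logodds(value, not_occupied_certain, in_lane, drivable,
                   expects_uniform, surface_uniform, step=0.1):
    """Apply the three conditional adjustments of claim 34 to a cell's
    occupied log-odds. `step` is a hypothetical magnitude."""
    if not_occupied_certain and in_lane and drivable:
        value -= step   # confident, in-lane, drivable: less likely occupied
    if expects_uniform and not surface_uniform:
        value += step   # unexpected non-uniform surface: more likely occupied
    if expects_uniform and surface_uniform:
        value -= step   # expected uniform surface confirmed: less likely occupied
    return value
```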

35. A method for real-time control of a configuration of a device, the device including a chassis, at least four wheels, a first side of the chassis operably coupled with at least one of the at least four wheels, and an opposing second side of the chassis operably coupled with at least one of the at least four wheels, the method comprising:

creating a map based at least on prior surface features and an occupancy grid, the map being created in non-real time, the map including at least one location, the at least one location associated with at least one surface feature, the at least one surface feature being associated with at least one surface classification and at least one mode;

determining current surface features as the device travels;

updating the occupancy grid in real-time with the current surface features;

determining, from the occupancy grid and the map, a path the device can travel to traverse the at least one surface feature.

36. A method for real-time control of a configuration of a device, the device including a chassis, at least four wheels, a first side of the chassis operably coupled with at least one of the at least four wheels, and an opposing second side of the chassis operably coupled with at least one of the at least four wheels, the method comprising:

receiving environmental data;

determining a surface type based at least on the environmental data;

determining a mode based at least on the surface type and a first configuration;

determining a second configuration based at least on the mode and the surface type;

determining movement commands based at least on the second configuration; and

controlling the configuration of the device by using the movement commands to change the device from the first configuration to the second configuration.
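The surface-type to mode to configuration chain of claim 36 can be sketched with lookup tables; the table contents and configuration names here are invented for illustration and do not appear in the claims:

```python
# Invented lookup tables for illustration; the claims name no specific
# surface types, modes, or configurations.
SURFACE_MODE = {"sidewalk": "standard", "curb": "enhanced", "grass": "cautious"}
MODE_CONFIG = {"standard": "four_wheel",
               "enhanced": "two_wheel_cluster",
               "cautious": "four_wheel_slow"}

def control_step(environmental_data, first_configuration):
    """One pass of the claim 36 chain: surface type -> mode -> second
    configuration -> movement commands."""
    surface = environmental_data["surface_type"]
    mode = SURFACE_MODE.get(surface, "standard")
    second_configuration = MODE_CONFIG[mode]
    if second_configuration == first_configuration:
        commands = []  # already in the commanded configuration
    else:
        commands = [("reconfigure", first_configuration, second_configuration)]
    return second_configuration, commands
```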

37. The method as in claim 36 wherein the environmental data comprises RGB-D image data.

38. The method as in claim 36 further comprising:

populating an occupancy grid based at least on the surface type and the mode; and

determining the movement commands based at least on the occupancy grid.

39. The method as in claim 38 wherein the occupancy grid comprises information based at least on data from at least one image sensor.

40. The method as in claim 36 wherein the environmental data comprises a topology of a road surface.

41. The method as in claim 36 wherein the configuration comprises two pairs of clustered wheels of the at least four wheels, a first pair of the two pairs being positioned on the first side, a second pair of the two pairs being positioned on the second side, the first pair including a first front wheel and a first rear wheel, and the second pair including a second front wheel and a second rear wheel.

42. The method as in claim 41 wherein the controlling of the configuration comprises:

coordinated powering of the first pair and the second pair based at least on the environmental data.

43. The method as in claim 41 wherein the controlling of the configuration comprises:

transitioning from driving the at least four wheels with a pair of casters retracted, the pair of casters operably coupled to the chassis, to driving two wheels with the clustered first pair and the clustered second pair rotated to lift the first front wheel and the second front wheel, the device resting on the first rear wheel, the second rear wheel, and the pair of casters.

44. The method as in claim 41 wherein the controlling of the configuration comprises:

rotating a pair of clusters operably coupled with a first two powered wheels on the first side and a second two powered wheels on the second side based at least on the environmental data.

45. The method as in claim 36 wherein the device further comprises a cargo container, the cargo container mounted on the chassis, the chassis controlling a height of the cargo container.

46. The method as in claim 45 wherein the height of the cargo container is based at least on the environmental data.

47. A system for real-time control of a configuration of a device, the device including a chassis, at least four wheels, a first side of the chassis, and an opposing second side of the chassis, the system comprising:

a device processor receiving real-time environmental data surrounding the device, the device processor determining a surface type based at least on the environmental data, the device processor determining a mode based at least on the surface type and a first configuration, the device processor determining a second configuration based at least on the mode and the surface type; and

a powerbase processor determining movement commands based at least on the second configuration, the powerbase processor controlling the configuration of the device by using the movement commands to change the device from the first configuration to the second configuration.

48. The system as in claim 47 wherein the environmental data comprises RGB-D image data.

49. The system as in claim 47 wherein the device processor comprises populating an occupancy grid based at least on the surface type and the mode.

50. The system as in claim 49 wherein the powerbase processor comprises determining the movement commands based at least on the occupancy grid.

51. The system as in claim 49 wherein the occupancy grid comprises information based at least on data from at least one image sensor.

52. The system as in claim 47 wherein the environmental data comprises a topology of a road surface.

53. The system as in claim 47 wherein the configuration comprises two pairs of clustered wheels of the at least four wheels, a first pair of the two pairs being positioned on the first side, a second pair of the two pairs being positioned on the second side, the first pair having a first front wheel and a first rear wheel, and the second pair having a second front wheel and a second rear wheel.

54. The system as in claim 53 wherein the controlling of the configuration comprises:

coordinated powering of the first pair and the second pair based at least on the environmental data.

55. The system as in claim 53 wherein the controlling of the configuration comprises:

transitioning from driving the at least four wheels with a pair of casters retracted, the pair of casters operably coupled to the chassis, to driving two wheels with the clustered first pair and the clustered second pair rotated to lift the first front wheel and the second front wheel, the device resting on the first rear wheel, the second rear wheel, and the pair of casters.

56. A method for maintaining a global occupancy grid comprising:

locating a first position of an autonomous device;

when the autonomous device moves to a second position, the second position being associated with the global occupancy grid and a local occupancy grid,

updating the global occupancy grid with at least one occupied probability value associated with the first position;

updating the global occupancy grid with at least one drivable surface associated with the local occupancy grid;

updating the global occupancy grid with surface confidences associated with the at least one drivable surface;

updating the global occupancy grid with logodds of the at least one occupied probability value using a first Bayesian function; and

adjusting the logodds based at least on characteristics associated with the second position; and

when the autonomous device remains in the first position and the global occupancy grid and the local occupancy grid are co-located,

updating the global occupancy grid with the at least one drivable surface associated with the local occupancy grid;

updating the global occupancy grid with the surface confidences associated with the at least one drivable surface;

updating the global occupancy grid with the logodds of the at least one occupied probability value using a second Bayesian function; and

adjusting the logodds based at least on characteristics associated with the second position.

57. The method as in claim 35 wherein creating the map comprises:

accessing point cloud data representing the surface;

filtering the point cloud data;

forming the filtered point cloud data into processable parts;

merging the processable parts into at least one concave polygon;

locating and labeling the at least one SDSF in the at least one concave polygon, the locating and labeling forming labeled point cloud data;

creating graphing polygons based at least on the at least one concave polygon; and

choosing the path from a starting point to an ending point based at least on the graphing polygons, the device traversing the at least one SDSF along the path.

58. The method as in claim 57 wherein the filtering the point cloud data comprises:

conditionally removing points representing transient objects and points representing outliers from the point cloud data; and

replacing the removed points having a pre-selected height.

59. The method as in claim 57 wherein forming the processable parts comprises:

segmenting the point cloud data into the processable parts; and

removing points of a pre-selected height from the processable parts.

60. The method as in claim 57 wherein the merging the processable parts comprises:

reducing a size of the processable parts by analyzing outliers, voxels, and normals;

growing regions from the reduced-size processable parts;

determining initial drivable surfaces from the grown regions;

segmenting and meshing the initial drivable surfaces;

locating polygons within the segmented and meshed initial drivable surfaces; and

setting at least one drivable surface based at least on the polygons.

61. The method as in claim 60 wherein the locating and labeling the at least one SDSF comprises:

sorting the point cloud data of the initial drivable surfaces according to a SDSF filter, the SDSF filter including at least three categories of points; and

locating at least one SDSF point based at least on whether the at least three categories of points, in combination, meet at least one first pre-selected criterion.

62. The method as in claim 61 further comprising:

creating at least one SDSF trajectory based at least on whether a plurality of the at least one SDSF point, in combination, meet at least one second pre-selected criterion.

63. The method as in claim 62 wherein the creating graphing polygons further comprises:

creating at least one polygon from the at least one drivable surface, the at least one polygon including exterior edges;

smoothing the exterior edges;

forming a driving margin based on the smoothed exterior edges;

adding the at least one SDSF trajectory to the at least one drivable surface; and

removing interior edges from the at least one drivable surface according to at least one third pre-selected criterion.

64. The method as in claim 63 wherein the smoothing of the exterior edges comprises:

trimming the exterior edges outward, forming outward edges.

65. The method as in claim 63 wherein forming the driving margin of the smoothed exterior edges comprises:

trimming the outward edges inward.

66. An autonomous delivery vehicle comprising:

a power base including two powered front wheels, two powered back wheels and energy storage, the power base configured to move at a commanded velocity;

a cargo platform including a plurality of short-range sensors, the cargo platform mechanically attached to the power base;

a cargo container with a volume for receiving one or more objects to deliver, the cargo container mounted on top of the cargo platform;

a long-range sensor suite comprising LIDAR and one or more cameras, the long-range sensor suite mounted on top of the cargo container; and

a controller to receive data from the long-range sensor suite and the plurality of short-range sensors.

67. The autonomous delivery vehicle of claim 66 wherein the plurality of short-range sensors detect at least one characteristic of a drivable surface.

68. The autonomous delivery vehicle of claim 66 wherein the plurality of short-range sensors are stereo cameras.

69. The autonomous delivery vehicle of claim 66 wherein the plurality of short-range sensors comprise an IR projector, two image sensors and an RGB sensor.

70. The autonomous delivery vehicle of claim 66 wherein the plurality of short-range sensors are radar sensors.

71. The autonomous delivery vehicle of claim 66 wherein the short-range sensors supply RGB-D data to the controller.

72. The autonomous delivery vehicle of claim 66 wherein the controller determines a geometry of a road surface based on RGB-D data received from the plurality of short-range sensors.

73. The autonomous delivery vehicle of claim 66 wherein the plurality of short-range sensors detect objects within 4 meters of the autonomous delivery vehicle and the long-range sensor suite detects objects more than 4 meters from the autonomous delivery vehicle.

74. An autonomous delivery vehicle comprising:

a power base including at least two powered back wheels, caster front wheels and energy storage, the power base configured to move at a commanded velocity;

a cargo platform including a plurality of short-range sensors, the cargo platform mechanically attached to the power base;

a cargo container with a volume for receiving one or more objects to deliver, the cargo container mounted on top of the cargo platform;

a long-range sensor suite comprising LIDAR and one or more cameras, the long-range sensor suite mounted on top of the cargo container; and

a controller to receive data from the long-range sensor suite and the plurality of short-range sensors.

75. The autonomous delivery vehicle of claim 74 wherein the plurality of short-range sensors detect at least one characteristic of a drivable surface.

76. The autonomous delivery vehicle of claim 74 wherein the plurality of short-range sensors are stereo cameras.

77. The autonomous delivery vehicle of claim 74 wherein the plurality of short-range sensors comprise an IR projector, two image sensors and an RGB sensor.

78. The autonomous delivery vehicle of claim 74 wherein the plurality of short-range sensors are radar sensors.

79. The autonomous delivery vehicle of claim 74 wherein the short-range sensors supply RGB-D data to the controller.

80. The autonomous delivery vehicle of claim 74 wherein the controller determines a geometry of a road surface based on RGB-D data received from the plurality of short-range sensors.

81. The autonomous delivery vehicle of claim 74 wherein the plurality of short-range sensors detect objects within 4 meters of the autonomous delivery vehicle and the long-range sensor suite detects objects more than 4 meters from the autonomous delivery vehicle.

82. The autonomous delivery vehicle of claim 74, further comprising a second set of powered wheels that may engage the ground while the caster wheels are lifted off the ground.

83. An autonomous delivery vehicle comprising:

a power base including at least two powered back wheels, caster front wheels and energy storage, the power base configured to move at a commanded velocity;

a cargo platform, the cargo platform mechanically attached to the power base; and

a short-range camera assembly mounted to the cargo platform that detects at least one characteristic of a drivable surface, the short-range camera assembly comprising:

a camera;

a first light; and

a first liquid-cooled heat sink,

wherein the first liquid-cooled heat sink cools the first light and the camera.

84. The autonomous delivery vehicle according to claim 83, wherein the short-range camera assembly further comprises a thermoelectric cooler between the camera and the first liquid-cooled heat sink.

85. The autonomous delivery vehicle according to claim 83, wherein the first light and the camera are recessed in a cover with openings that deflect illumination from the first light away from the camera.

86. The autonomous delivery vehicle according to claim 83, wherein the lights are angled downward by at least 15° and recessed at least 4 mm in a cover to minimize illumination distracting a pedestrian.

87. The autonomous delivery vehicle according to claim 83, wherein the camera has a field of view and the first light comprises two LEDs with lenses to produce two beams of light that spread to illuminate the field of view of the camera.

88. The autonomous delivery vehicle according to claim 87, wherein the lights are angled approximately 50° apart and the lenses produce a 60° beam.

89. The autonomous delivery vehicle according to claim 83, wherein the short-range camera assembly includes an ultrasonic sensor mounted above the camera.

90. The autonomous delivery vehicle according to claim 83, where the short-range camera assembly is mounted in a center position on a front face of the cargo platform.

91. The autonomous delivery vehicle according to claim 83, further comprising at least one corner camera assembly mounted on at least one corner of a front face of the cargo platform, the at least one corner camera assembly comprising:

an ultrasonic sensor;

a corner camera;

a second light; and

a second liquid-cooled heat sink, wherein the second liquid-cooled heat sink cools the second light and the corner camera.

92. The method as in claim 22 wherein the historical data comprises surface data.

93. The method as in claim 22 wherein the historical data comprises discontinuity data.