Artificial Intelligence In Robotics Research Paper

  • Agin, G. J., [1980], "Computer vision systems for industrial inspection and assembly," Computer, 13, 11–20.

  • Ambler, A. P., and Popplestone, R. J., [1975], "Inferring the positions of bodies from specified spatial relationships," Artificial Intelligence, 6, 2, 157–174.

  • Asada, H., [June 1982], "A characteristics analysis of manipulator dynamics using principal transformations," Proc. American Control Conf., Washington, D.C.

  • Asada, H., [1983], Proc. International Symposium of Robotics Research.

  • Asada, H., and Kanade, T., [Aug. 1981], "Design concept of direct-drive manipulators using rare-earth DC torque motors," Proc. 7th Int. Joint Conf. Artificial Intelligence, Vancouver, British Columbia, 775–778.

  • Baker, H. Harlyn, and Binford, T. O., [1981], "Depth from edge and intensity based stereo," Int. Jt. Conf. Artif. Intell., 6.

  • Binford, T. O., [1981], "Inferring surfaces from images," Artificial Intelligence, 17, 205–245.

  • Boissonnat, J.-D., [1982], "Stable matching between a hand structure and an object silhouette," IEEE Trans. Patt. Anal. and Mach. Intell., PAMI-4, 603–611.

  • Brady, Michael, [1982], "Parts description and acquisition using vision," Robot Vision, Rosenfeld, A. [ed.], Proc. SPIE, Washington, D.C., 1–7.

  • Brady, Michael, [1983a], "Parallelism in vision," Artificial Intelligence, to appear.

  • Brady, Michael, [1983b], "Criteria for shape representations," Human and Machine Vision, Beck, J., and Rosenfeld, A. [eds.], Academic Press.

  • Brady, Michael, [1983c], "Trajectory planning," Robot Motion: Planning and Control, Brady, M., Hollerbach, J. M., Johnson, T. J., Lozano-Pérez, T., and Mason, M. T. [eds.], MIT Press.

  • Brady, Michael, [1983d], "Representing shape," this volume.

  • Brady, Michael, and Asada, Haruo, [1983], "Smoothed local symmetries and their implementation," Proc. First Int. Symp. Robotics Research.

  • Brady, Michael, and Yuille, Alan, [1983], "An extremum principle for shape from contour," MIT AI Lab., AIM-711.

  • Brooks, R. A., [1981], "Symbolic reasoning among 3-D models and 2-D images," Artificial Intelligence, 17, 285–348.

  • Brooks, R. A., [1982], "Symbolic error analysis and robot planning," Int. Journal of Robotics Research, 1 [4], 29–68.

  • Brooks, R. A., [1983a], "Solving the findpath problem by good representation of free space," IEEE Trans. Sys., Man and Cyb., SMC-13, 190–197.

  • Brooks, R. A., [1983b], "Planning collision free motions for pick and place operations," International Journal of Robotics Research, 2 [4].

  • Brooks, R. A., and Lozano-Pérez, Tomás, [1983], "A subdivision algorithm in configuration space for findpath with rotation," Proc. Int. Jt. Conf. Artif. Intell., Karlsruhe.

  • Bruss, A., and Horn, B. K. P., [1981], "Passive navigation," MIT AI Lab., AIM-662.

  • Bundy, Alan, et al., [1979], "Solving mechanics problems using meta-level inference," Expert Systems in the Microelectronic Age, Michie, D. [ed.], Edinburgh Univ. Press.

  • Canny, J. F., [Sept. 1983], "Finding lines and edges in images," Proc. AAAI Conf., Washington, D.C.

  • Canny, J. F., [1983], "Finding lines and edges in images," MIT.

  • Clocksin, W. F., et al., [1982], "Progress in visual feedback for arc-welding of thin sheet steel," Robot Vision, Pugh, Alan [ed.], IFS.

  • Davis, Larry S., and Rosenfeld, Azriel, [1981], "Cooperating processes for low-level vision: a survey," Artificial Intelligence, 17, 245–265.

  • de Kleer, J., [1975], "Qualitative and quantitative knowledge in classical mechanics," MIT AI Lab., AI-TR-352.

  • Faugeras, O., et al., [1982], "Towards a flexible vision system," Robot Vision, Pugh, Alan [ed.], IFS.

  • Featherstone, R., [1983], "Position and velocity transformations between robot end effector coordinates and joint angles," International Journal of Robotics Research, 2 [2].

  • Forbus, K. D., [1983], "Qualitative process theory," MIT AI Lab., AIM-664A.

  • Franklin, James W., and VanderBrug, G. J., [March 1982], "Programming vision and robotics systems with RAIL," Robots VI Conf., Detroit, SME.

  • Gaston, Peter C., and Lozano-Pérez, Tomás, [1983], "Tactile recognition and localization using object models: the case of polyhedra on a plane," MIT AI Lab., AIM-705.

  • Goto, T., Takeyasu, K., and Inoyama, T., [1980], "Control algorithm for precision insert operation robots," IEEE Trans. Systems, Man, Cybernetics, SMC-10, 1, 19–25.

  • Grimson, W. E. L., [1981], From Images to Surfaces: A Computational Study of the Human Early Visual System, MIT Press, Cambridge.

  • Hackwood, S., and Beni, G., [1983], "Torque sensitive tactile array for robotics," Int. Jour. Robotics Research, 2 [2].

  • Haralick, Robert M., Watson, Layne T., and Laffey, Thomas J., [1983], "The topographic primal sketch," International Journal of Robotics Research, 2 [1], 50–72.

  • Harmon, L., [1982], "Automated tactile sensing," Int. Jour. Robotics Research, 1 [2], 3–33.

  • Hildreth, E., [1983], "The measurement of visual motion," MIT AI Lab.

  • Hillis, W. Daniel, [1982], "A high-resolution image touch sensor," Int. Jour. Robotics Research, 1 [2], 33–44.

  • Hollerbach, J. M., [1983], "Dynamics," Robot Motion: Planning and Control, Brady, M., Hollerbach, J. M., Johnson, T. J., Lozano-Pérez, T., and Mason, M. T. [eds.], MIT Press.

  • Hollerbach, J. M., and Sahar, Gideon, [1983], "Wrist-partitioned inverse kinematic accelerations and manipulator dynamics," MIT AI Lab., AIM-717.

  • Horn, B. K. P., [1982], "Sequins and quills — representations for surface topography," Representation of 3-Dimensional Objects, Bajcsy, R. [ed.], Springer-Verlag.

  • Hopcroft, J. E., Schwartz, J. T., and Sharir, M., [1983], "Efficient detection of intersections among spheres," International Journal of Robotics Research, 2 [4].

  • Horn, B. K. P., and Schunck, B. G., [1981], "Determining optical flow," Artificial Intelligence, 17, 185–203.

  • Ikeuchi, K., [1981], "Determination of surface orientations of specular surfaces by using the photometric stereo method," IEEE [accepted for publication].

  • Ikeuchi, K., and Horn, B. K. P., [1981], "Numerical shape from shading and occluding boundaries," Artificial Intelligence, 17, 141–185.

  • Ikeuchi, K., and Horn, B. K. P., [1983], Proc. First Int. Symp. Robotics Research.

  • Lieberman, L. I., and Wesley, M. A., [1977], "AUTOPASS: an automatic programming system for computer controlled mechanical assembly," IBM J. Research and Development, 21, 4, 321–333.

  • Lowe, D. G., and Binford, T. O., [1982], "Segmentation and aggregation: an approach to figure-ground phenomena," Proc. Image Understanding Workshop, Baumann, Lee S. [ed.], Sci. App. Inc., Tysons Corner, Va., 168–178.

  • Lozano-Pérez, T., [1976], "The design of a mechanical assembly system," MIT AI Lab., AI-TR-397.

  • Lozano-Pérez, Tomás, [1981], "Automatic planning of manipulator transfer movements," IEEE Trans. Sys., Man and Cyb., SMC-11, 681–698.

  • Lozano-Pérez, Tomás, [1983a], "Spatial planning: a configuration space approach," IEEE Trans. Comp., C-32, 108–120.

  • Lozano-Pérez, Tomás, [1983b], "Robot programming," MIT AI Lab., AIM-698.

  • Lozano-Pérez, Tomás, Mason, Matthew T., and Taylor, R. H., [1983c], "Automatic synthesis of fine-motion strategies for robots," Proc. International Symposium of Robotics Research.

  • Lozano-Pérez, Tomás, [1983d], "Spatial reasoning," Robot Motion: Planning and Control, Brady, M., Hollerbach, J. M., Johnson, T. J., Lozano-Pérez, T., and Mason, M. T. [eds.], MIT Press.

  • Lozano-Pérez, T., and Grimson, W. E. L., [1983], "Local constraints in tactile recognition," MIT AI Lab.

  • Lozano-Pérez, T., Mason, M. T., and Taylor, R. H., [1983], Proc. First Int. Symp. Robotics Research.

  • Marr, D., [1982], Vision, Freeman, San Francisco.

  • Marr, D., and Hildreth, E. C., [1980], "Theory of edge detection," Proc. R. Soc. Lond. B, 207, 187–217.

  • Marr, D., and Poggio, T., [1979], "A computational theory of human stereo vision," Proc. R. Soc. Lond. B, 204, 301–328.

  • Mason, M. T., [1981], "Compliance and force control for computer controlled manipulators," IEEE Trans. Sys., Man and Cyb., SMC-11, 418–432 [reprinted in Robot Motion: Planning and Control, Brady, M., Hollerbach, J. M., Johnson, T. J., Lozano-Pérez, T., and Mason, M. T. [eds.], MIT Press].

  • Mason, M. T., [1983], "Compliance," Robot Motion: Planning and Control, Brady, M., Hollerbach, J. M., Johnson, T. J., Lozano-Pérez, T., and Mason, M. T. [eds.], MIT Press.

  • Paul, R. P., [1981], Robot Manipulators: Mathematics, Programming, and Control, MIT Press, Cambridge, Mass.

  • Pieper, D. L., [1968], The Kinematics of Manipulators under Computer Control, Ph.D. thesis, Department of Computer Science, Stanford University.

  • Pieper, D. L., and Roth, B., [September 1969], "The kinematics of manipulators under computer control," Proc. 2nd Int. Conf. Theory of Machines and Mechanisms, Warsaw.

  • Popplestone, R. J., Ambler, A. P., and Bellos, I. M., [1980], "An interpreter for a language for describing assemblies," Artificial Intelligence, 14, 79–107.

  • Porter, G., and Mundy, J., [1982], "A non-contact profile sensor system for visual inspections," IEEE Workshop on Ind. Appl. of Mach. Vis.

  • Raibert, M. H., and Craig, J. J., [1983], "A hybrid force and position controller," Robot Motion: Planning and Control, Brady, M., Hollerbach, J. M., Johnson, T. J., Lozano-Pérez, T., and Mason, M. T. [eds.], MIT Press.

  • Raibert, Marc H., and Tanner, John E., [1982], "Design and implementation of a VLSI tactile sensing computer," Int. Jour. Robotics Research, 1 [3], 3–18.

  • Requicha, A. A. G., [December 1980], "Representation of rigid solids: theory, methods, and systems," Computing Surveys, 12, 4, 437–464.

  • Rich, C., and Waters, R., [1981], "Abstraction, inspection, and debugging in programming," MIT AI Lab., AIM-634.

  • Sacerdoti, E., [1975], A Structure for Plans and Behavior, SRI Artificial Intelligence Center, TR-109.

  • Salisbury, J. K., [1982], Kinematic and Force Analysis of Articulated Hands, Ph.D. thesis, Department of Mechanical Engineering, Stanford University.

  • Salisbury, J. K., and Craig, J. J., [1982], "Articulated hands: force control and kinematic issues," Int. J. Robotics Research, 1, 1, 4–17.

  • Schunck, B. G., [1983], "Motion segmentation and estimation," MIT AI Lab.

  • Schwartz, Jacob T., and Sharir, Micha, [1983], "The piano movers' problem III," International Journal of Robotics Research, 2 [3].

  • Taylor, R. H., [July 1976], "The synthesis of manipulator control programs from task-level specifications," Stanford AI Lab., AIM-282.

  • Taylor, R. H., Summers, P. D., and Meyer, J. M., [1982], "AML: a manufacturing language," International Journal of Robotics Research, 1 [3], 19–41.

  • Terzopoulos, D., [1983], "Multi-level reconstruction of visual surfaces," Computer Graphics and Image Processing.

  • VAL, [1980], User's Guide: A Robot Programming and Control System, CONDEC Unimation Robotics.

  • Villers, Philippe, [1982], "Present industrial use of vision sensors for robot guidance," Robot Vision, Pugh, Alan [ed.], IFS.

  • Vilnrotter, F., Nevatia, R., and Price, K. E., [1981], "Structural analysis of natural textures," Proc. Image Understanding Workshop, Baumann, Lee S. [ed.], 61–68.

  • Wesley, M. A., et al., [January 1980], "A geometric modeling system for automated mechanical assembly," IBM J. Research and Development, 24, 1, 64–74.

  • Whitney, D. E., [1983], "The mathematics of compliance," Robot Motion: Planning and Control, Brady, M., Hollerbach, J. M., Johnson, T. J., Lozano-Pérez, T., and Mason, M. T. [eds.], MIT Press.

  • Winston, Patrick H., [1983], Artificial Intelligence, second edition, Addison-Wesley, Reading, Mass.

  • Winston, Patrick H., Binford, Thomas O., Katz, Boris, and Lowry, Michael, [1983], "Learning physical descriptions from functional descriptions, examples, and precedents," MIT AI Lab., AIM-679.

  • Witkin, Andrew P., [1981], "Recovering surface shape and orientation from texture," Artificial Intelligence, 17, 17–47.

  • Zucker, S. W., Hummel, R. A., and Rosenfeld, Azriel, [1977], "An application of relaxation labelling to line and curve enhancement," IEEE Trans. Computers, C-26, 394–403, 922–929.


more information than passive sensors, but they also consume more power. This can be a problem for mobile robots, which must carry their energy with them in batteries. Whether active or passive, sensors fall into three types: those that record distances to objects, those that generate an entire image of the environment, and those that measure a property of the robot itself. Many mobile robots use range finders, which measure the distance to nearby objects. A common type is the sonar sensor; alternatives to sonar include radar and laser. Some range sensors measure very short or very long distances. Close-range sensors are often tactile sensors such as whiskers, bump panels, and touch-sensitive skin. At the other extreme are long-range sensors such as the Global Positioning System (GPS).

The second important class of sensors is imaging sensors: cameras that provide images of the environment, which can then be analyzed using computer vision and image recognition techniques. The third important class is proprioceptive sensors, which inform the robot of its own state. To measure the exact configuration of a robotic joint, motors are often equipped with shaft decoders that count the revolutions of the motor in small increments. Another way of measuring the state of the robot is to use force and torque sensors. These are especially needed when the robot handles fragile objects or objects whose exact shape and location are unknown: imagine a one-ton robot manipulator screwing in a light bulb.
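The shaft decoders just described count motor revolutions in small increments; turning a tick count into a joint angle is a one-line computation. The sketch below assumes an illustrative encoder resolution and gear ratio, not any particular robot's values:

```python
import math

TICKS_PER_REV = 1024   # encoder ticks per motor revolution (illustrative)
GEAR_RATIO = 50        # motor revolutions per joint revolution (illustrative)

def joint_angle_rad(tick_count: int) -> float:
    """Convert accumulated encoder ticks into a joint angle in radians.

    Each tick is 1/TICKS_PER_REV of a motor revolution, and the joint
    turns GEAR_RATIO times more slowly than the motor shaft.
    """
    motor_revs = tick_count / TICKS_PER_REV
    return 2.0 * math.pi * motor_revs / GEAR_RATIO

# Half a joint revolution corresponds to GEAR_RATIO * TICKS_PER_REV / 2 ticks.
print(joint_angle_rad(1024 * 25))  # approximately pi
```

The same accumulation scheme underlies odometry on wheeled robots, where wheel encoders stand in for joint encoders.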

D. Effectors

Effectors are the means by which robots manipulate the environment, move, and change the shape of their bodies. To understand a robot's ability to interact with the physical world, we use the abstract concept of a degree of freedom (DOF): we count one degree of freedom for each independent direction in which the robot, or one of its effectors, can move. As an example, consider a rigid robot such as an autonomous underwater vehicle (AUV). It has six degrees of freedom: three for its (x, y, z) location in space and three for its angular orientation (yaw, roll, and pitch). These DOFs define the kinematic state of the robot. This can be extended with another dimension giving the rate of change of each kinematic dimension, which is called the dynamic state. Robots with non-rigid bodies may have additional DOFs. For example, a human wrist has three degrees of freedom: it can move up and down, side to side, and also rotate. Robot joints have 1, 2, or 3 degrees of freedom each. Six degrees of freedom are required to place an object, such as a hand, at a particular point in a particular orientation. The manipulator shown in Figure 1 has exactly six degrees of freedom, created by five revolute joints (R) and one prismatic joint (P): revolute joints generate rotational motion, while prismatic joints generate sliding motion. If you take your own arm as an example, you will notice that it has more than six degrees of freedom; with your hand flat on the table, you still have the freedom to rotate your elbow. Manipulators with more degrees of freedom than are needed to place the end effector at a target location are easier to control than robots with only the minimum number of DOFs. Mobile robots are somewhat special: their degrees of freedom need not correspond to actuated elements. Think of a car. It can move forward or backward, and it can turn, giving it two DOFs. Yet its kinematic configuration is three-dimensional: on a flat surface such as a parking lot, you can maneuver the car to any (x, y) point, in any orientation. The car therefore has 3 effective DOFs but only 2 controllable DOFs.
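The car example above can be made concrete with a unicycle-style kinematic model, a common textbook simplification rather than a full car model: two controls (forward speed and turn rate) drive three state variables (x, y, heading), matching the 2 controllable versus 3 effective DOFs.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float      # position (m)
    y: float
    theta: float  # heading (rad)

def step(p: Pose, v: float, omega: float, dt: float) -> Pose:
    """Advance the pose given forward speed v and turn rate omega.

    Only two controls (v, omega) exist, yet the state is
    three-dimensional: the robot cannot translate sideways directly,
    which is exactly the nonholonomic constraint, although it can
    still reach any (x, y, theta) by maneuvering.
    """
    return Pose(p.x + v * math.cos(p.theta) * dt,
                p.y + v * math.sin(p.theta) * dt,
                p.theta + omega * dt)

# Driving straight for 1 s at 1 m/s moves the car 1 m along x.
p = step(Pose(0.0, 0.0, 0.0), v=1.0, omega=0.0, dt=1.0)
print(p)  # Pose(x=1.0, y=0.0, theta=0.0)
```

Parallel parking in this model requires composing forward and backward arcs, which is precisely why nonholonomic robots are harder to control.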

We say a robot is nonholonomic if it has more effective DOFs than controllable DOFs, and holonomic if the two numbers are the same. Holonomic robots are easier to control than nonholonomic ones (think of parking a car: it would be much easier if you could move the car sideways), but holonomic robots are mechanically more complex. Most manipulators and robot arms are holonomic, and most mobile robots are nonholonomic. [1]

Computer science

AI researchers have created many tools to solve the most difficult problems in computer science. Many of their inventions have been adopted by mainstream computer science and are no longer considered a part of AI.

Finance

Banks use artificial intelligence systems to organize operations, invest in stocks, and manage properties. In August 2001, robots beat humans in a simulated financial trading competition. Financial institutions have long used artificial neural network systems to detect charges or claims outside of the norm, flagging these for human investigation.

Heavy industry

Robots have become common in many industries. They are often given jobs that are considered dangerous to humans. Robots have proven effective in jobs that are very repetitive, which may lead to mistakes or accidents due to a lapse in concentration, and in other jobs which humans may find degrading. Japan is the world leader in using and producing robots. In 1999, 1,700,000 robots were in use worldwide. For more information, see the survey about artificial intelligence in business.
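The transaction-flagging described under Finance above can be illustrated far more simply than with a neural network. The sketch below uses a plain z-score threshold; the function name, the sample charges, and the threshold are all illustrative assumptions, not any bank's actual system:

```python
from statistics import mean, stdev

def flag_outliers(amounts, threshold=2.0):
    """Return charges lying more than `threshold` sample standard
    deviations from the mean, for human investigation.

    Note: with only a handful of points the maximum possible z-score
    is small, which is why the illustrative threshold here is 2.0.
    """
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if sigma and abs(a - mu) / sigma > threshold]

charges = [12.0, 9.5, 11.2, 10.8, 13.1, 950.0]  # one anomalous charge
print(flag_outliers(charges))  # [950.0]
```

Production systems model far richer features (merchant, location, time of day), but the principle of scoring each event against a learned notion of "normal" is the same.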
