
Folks:

This week's OSU Robotics seminar will be presented by our own robotics students. The seminar will be an *in person* event and simulcast over Zoom.

Friday, April 29, 10-11 AM
LINC 268
https://oregonstate.zoom.us/j/93908195212?pwd=cEdoQVdac0JxMUpOSS9xVzZjVG5xUT...

Here are the speakers and the topics:

Helei Duan
Time required: 4 min + questions
Title: Sim-to-Real Learning of Footstep-Constrained Bipedal Dynamic Walking
Abstract: Recently, work on reinforcement learning (RL) for bipedal robots has successfully learned controllers for a variety of dynamic gaits with robust sim-to-real demonstrations. To maintain balance, the learned controllers have full freedom of where to place the feet, resulting in highly robust gaits. In the real world, however, the environment often imposes constraints on the feasible footstep locations, typically identified by perception systems. Unfortunately, most demonstrated RL controllers on bipedal robots do not allow for specifying and responding to such constraints. This missing control interface greatly limits the real-world application of current RL controllers. In this paper, we aim to maintain the robust and dynamic nature of learned gaits while also respecting externally imposed footstep constraints. We develop an RL formulation for training dynamic gait controllers that can respond to specified touchdown locations. We then successfully demonstrate simulation and sim-to-real performance on the bipedal robot Cassie. In addition, we use supervised learning to induce a transition model for accurately predicting the next touchdown locations the controller can achieve given the robot's proprioceptive observations. This model paves the way for integrating the learned controller into a full-order robot locomotion planner that robustly satisfies both balance and environmental constraints.

Alexander You
Time required: 4 min + questions
Title: Precision fruit tree pruning using a learned hybrid vision/interaction controller
Abstract: Robotic tree pruning requires highly precise manipulator control in order to accurately align a cutting implement with the desired pruning point at the correct angle. Simultaneously, the robot must avoid applying excessive force to rigid parts of the environment such as trees, support posts, and wires. In this paper, we propose a hybrid control system that uses a learned vision-based controller to initially align the cutter with the desired pruning point, taking in images of the environment and outputting control actions. This controller is trained entirely in simulation but transfers easily to real trees via a neural network that transforms raw images into a simplified, segmented representation. Once contact is established, the system hands over control to an interaction controller that guides the cutter pivot point to the branch while minimizing interaction forces. With this simple yet novel approach, we demonstrate an improvement of over 30 percentage points in accuracy over a baseline controller that uses camera depth data.
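For context on the hybrid switching idea Alexander describes, here is a minimal sketch of a vision-then-interaction control loop. The controller names, the force threshold, and the placeholder policies are illustrative assumptions, not the actual controllers from the talk.

    import numpy as np

    # Assumed switching threshold on measured contact force (N); illustrative only.
    CONTACT_FORCE_THRESHOLD = 2.0

    def vision_controller(segmented_image):
        """Placeholder for a learned policy that maps a segmented image of the
        tree to a Cartesian velocity command for the cutter (zeros here)."""
        return np.zeros(3)

    def interaction_controller(wrench):
        """Placeholder admittance-style rule that nudges the cutter pivot toward
        the branch while backing away from large contact forces."""
        return -0.01 * wrench[:3]

    def control_step(segmented_image, wrench):
        """Run the vision controller in free space; hand over to the interaction
        controller once the measured contact force exceeds the threshold."""
        if np.linalg.norm(wrench[:3]) < CONTACT_FORCE_THRESHOLD:
            return vision_controller(segmented_image)
        return interaction_controller(wrench)

    # Example call with dummy sensor data (blank segmentation, 3.5 N contact force).
    command = control_step(np.zeros((128, 128)), np.array([0.0, 0.0, 3.5, 0.0, 0.0, 0.0]))
    print(command)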
Capprin Bass
Time required: 4 min + questions
Title: Characterizing Error in Noncommutative Geometric Gait Analysis
Authors: Capprin Bass and Ross L. Hatton
Abstract: A key problem in robotic locomotion is finding optimal shape changes to effectively displace systems through the world. Variational techniques for gait optimization require estimates of body displacement per gait cycle; however, these estimates introduce error due to unincluded higher-order terms. In this paper, we formulate existing estimates for displacement and describe the contribution of low-order terms to these estimates. We additionally describe the magnitude of higher (third) order effects and identify that the choice of body coordinate, gait diameter, and starting phase influences these effects. We demonstrate that varying these parameters on two example systems (the differential-drive car and the Purcell swimmer) effectively manages third-order contributions.

Jeremy Dao
Time required: 4 min + questions
Title: Sim-to-Real Learning for Bipedal Locomotion Under Unsensed Dynamic Loads
Authors: Jeremy Dao, Kevin Green, Helei Duan, Alan Fern, Jonathan Hurst
Abstract: Recent work on sim-to-real learning for bipedal locomotion has demonstrated new levels of robustness and agility over a variety of terrains. However, that work, and most prior bipedal locomotion work, has not considered locomotion under a variety of external loads that can significantly influence the overall system dynamics. In many applications, robots will need to maintain robust locomotion under a wide range of potential dynamic loads, such as pulling a cart or carrying a large container of sloshing liquid, ideally without requiring additional load-sensing capabilities. In this work, we explore the capabilities of reinforcement learning (RL) and sim-to-real transfer for bipedal locomotion under dynamic loads using only proprioceptive feedback. We show that prior RL policies trained for unloaded locomotion fail for some loads and that simply training in the context of loads is enough to produce successful and improved policies. We also compare training specialized policies for each load versus a single policy for all considered loads, and we analyze how the resulting gaits change to accommodate different loads. Finally, we demonstrate sim-to-real transfer, which is successful but shows a wider sim-to-real gap than prior unloaded work, pointing to interesting future research.

Tim Player
Time required: 10 min + questions
Title: Closed-Loop Generative Grasping with Temperospatial Sparse Convolution
Authors: Tim Player, Dongsik Chang, Fuxin Li, Geoffrey Hollinger
Abstract: Robotic grasp synthesis on unseen objects is a crucial skill that enables the application of generalized robotics to manufacturing, home service, and scientific tasks. While state-of-the-art robotic grasping systems achieve reliable performance grasping previously unseen objects in laboratory conditions, robots deployed in real-world environments must continue to operate when visual conditions are degraded or when environments are dynamic. Motivated by the problem of underwater sample collection, we propose a novel algorithm that identifies 6-DOF grasp poses in a streaming depth video in real time, enabling operation in dynamic environments through closed-loop control. Our technique uses temperospatial sparse convolution to process information from consecutive depth images in parallel, improving robustness to temporary occlusions. Our preliminary results indicate that the proposed grasp synthesis method is suitable for closed-loop grasping.
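As a small aside on the data representation in the last talk: below is a minimal sketch of how a short window of consecutive depth frames can be turned into the kind of 4-D (time + space) sparse coordinate list that a temperospatial sparse convolution would consume. The voxel size, window length, and camera intrinsics are assumptions for illustration, and no specific sparse-convolution library or the talk's actual pipeline is shown.

    import numpy as np

    VOXEL_SIZE = 0.01        # m; assumed voxel resolution
    FX = FY = 525.0          # assumed pinhole intrinsics
    CX, CY = 320.0, 240.0

    def depth_to_points(depth):
        """Back-project a depth image (H x W, metres) into 3-D camera-frame points."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - CX) * depth / FX
        y = (v - CY) * depth / FY
        points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
        return points[points[:, 2] > 0]      # drop invalid (zero-depth) pixels

    def frames_to_sparse_coords(depth_frames):
        """Voxelize each frame and tag its voxels with a frame index, yielding the
        (t, x, y, z) integer coordinates a 4-D sparse convolution could consume."""
        coords = []
        for t, depth in enumerate(depth_frames):
            voxels = np.unique(np.floor(depth_to_points(depth) / VOXEL_SIZE), axis=0)
            coords.append(np.hstack([np.full((len(voxels), 1), t), voxels]))
        return np.vstack(coords).astype(np.int32)

    # Example with a window of three synthetic depth frames.
    frames = [np.full((480, 640), 0.8 + 0.01 * t) for t in range(3)]
    print(frames_to_sparse_coords(frames).shape)   # (N, 4)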
-----
Ravi Balasubramanian (he/him/his)
Associate Professor
Collaborative Robotics and Intelligent Systems (CoRIS) Institute
School of Mechanical, Industrial, and Manufacturing Engineering
Oregon State University
Corvallis, OR 97331
ravi.balasubramanian@oregonstate.edu
Graf Hall 315
Ph#: 541-737-4267
http://web.engr.oregonstate.edu/~balasubr/
Zoom: https://oregonstate.zoom.us/j/7036813309 (Meeting password: robotics)
OSU Robotics: http://coris.oregonstate.edu/