CVPR 2019 Best Paper Award Winner: Shumian Xin & Ioannis Gkioulekas @ Carnegie Mellon University

Robin.ly interviews the CVPR 2019 Best Paper Award recipients Shumian Xin and Ioannis Gkioulekas of Carnegie Mellon University, who share their experience, research, and insights from working on this paper.

By Robin.ly on October 21, 2020
Category: Research Spotlights

Our CVPR 2019 special AI talk features Shumian Xin and Prof. Ioannis Gkioulekas, co-authors of “A Theory of Fermat Paths for Non-Line-of-Sight Shape Reconstruction”, and winners of the CVPR 2019 Best Paper Award. Shumian is a 2nd-year PhD student and Ioannis is an assistant professor, both from the Robotics Institute of Carnegie Mellon University.


They shared details of the impressive results from their recent research project, the importance of collaboration within the development process, their enthusiasm for computational imaging, and how their approach compares with LiDAR sensors.

Robin.ly CVPR 2019 Interview with Shumian Xin and Ioannis Gkioulekas, best paper award winners

Their research focuses on reconstructing unseen objects, hidden around corners or behind translucent materials, using specific light sources and sensors. For instance, they describe how this technology could improve the awareness of autonomous vehicles by detecting objects outside a conventional line of sight (i.e., objects around a corner or bend), and could enable minimally invasive procedures that gather information from inside the body. This is made possible by rapid laser pulses reflected off walls to reveal hidden objects.


The awards committee has described this as “both a beautiful paper theoretically as well as inspiring”, adding that it “continues to push the boundaries of what is possible in computer vision”.  

Watch the Complete Interview Here:


Interview Transcripts

Host: Shumian and Professor Ioannis, thank you for joining us. Shumian, congratulations on winning this year's Best Paper Award at CVPR.


Shumian: Thank you. Thank you for inviting me for this interview.


Ioannis: Thank you for inviting me.


Host: We’re delighted that both of you can join us today. Can you briefly introduce yourselves?


Shumian: Yeah, sure. I'm Shumian. I'm a second year PhD student at Carnegie Mellon University Robotics Institute. I work with Srinivasa and Ioannis on NLOS (non-line-of-sight) imaging. That's what this paper is about.


Ioannis: Hi, everyone. I'm Ioannis Gkioulekas. I am an Assistant Professor at Carnegie Mellon University Robotics Institute. I've been there for a couple of years. And I work on computational imaging and computer vision.


Host: So the paper today that won the award is called “A Theory of Fermat Paths for Non-Line-of-Sight Shape Reconstruction”. Can you give us a quick summary of your paper?


Shumian: Okay. The problem we want to solve in this work is to reconstruct objects that are blocked by some occluders and are out of the field of view of the camera or sensor. The way we do it is to look at some other surface, like a wall: through reflections off that wall, we get some information about the non-line-of-sight object. We use a time-of-flight sensor to collect data from the wall, and then use that time-of-flight information to reconstruct the NLOS shape.


Host: How long did it take you to work on this paper with your team?


Shumian: I've been working on this problem for the past two years. During those two years, we tried different approaches, and this is what we've come up with so far. I will continue working on this problem for a while.


Host: So what are the most important contributions of this work?


Shumian: The quality of the reconstruction of the non-line-of-sight object that we get is, I would say, pretty close to the reconstruction you would get in a line-of-sight setting, where your camera can directly see the object. So it's pretty exciting to see these non-line-of-sight reconstructions starting to look like line-of-sight reconstructions. It's as if we are making the entire world specular, like a mirror, so that we can reconstruct every object from anywhere.


Host: And can you tell us about some real life applications where this work can be applied to?


Shumian: Sure. There are a lot of important applications where these NLOS techniques can be used. For example, in medicine, this kind of technique could enable minimally invasive procedures. For doctors who want to look inside your body, it might be possible to just shine light on your throat; the photons will travel through your body and come back, and measuring them might give some information about what's inside your body.


There's also autonomous driving. Your car is driving one way, but it's critical to know what's happening on the other side of the road and around the corner. It would be great if you could know in advance what is happening there.


And in disaster environments, say there's a fire: this kind of technique can be used for search and rescue, to see what's happening on the other side of the corridor when a fire is blocking your view.


Host: So, they're pretty significant use cases. That’s very exciting work. What inspired you to work on such an interesting topic?


Shumian: Yes, those exciting potential applications that I've mentioned, obviously. And I also think it's interesting because it seems like magic at first: all of us are curious about what's happening around the corner. In ICCV 2009, Ramesh Raskar's group from MIT did the first NLOS reconstruction, showing the potential of this kind of work, seeing around the corner. Currently the entire computational imaging field is pushing this technique to another level. I wanted to be a part of that, so I joined this team.


Host: It sounds like it's a great step towards the next level as well. And there have been many research studies using LiDAR to solve similar problems. So why did you pick a different method to address this issue?


Shumian: Actually, the method we are using is not dramatically different from LiDAR, in the sense that LiDAR uses the first returning photon to estimate depth, while we use subsequent photons from the time-of-flight data we collect to do the NLOS reconstruction. For example, if we are directly looking at a wall and only use the first returning photon, as LiDAR does, we will only reconstruct that wall. What's interesting is that, to see around the corner, you have to use subsequent photons that come back to your sensor indirectly from those hidden objects. Similar to LiDAR, which uses only time information to estimate depth, because time multiplied by the speed of light gives the path length, we also use only time information, and from it we can directly reconstruct the shape of those objects.
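The arithmetic Shumian describes can be sketched in a few lines. This is an illustrative example, not code from the paper: it only shows how a photon's detection time converts to a total path length, and how conventional LiDAR halves that length to estimate depth for a direct round trip.

```python
# Speed of light in vacuum, in meters per second.
C = 299_792_458.0

def path_length(t_seconds: float) -> float:
    """Total optical path length traveled by a photon detected at time t."""
    return C * t_seconds

def lidar_depth(t_seconds: float) -> float:
    """Conventional LiDAR depth estimate from the first returning photon:
    the photon travels to the surface and back, so halve the path length."""
    return path_length(t_seconds) / 2.0

# A photon returning after 10 nanoseconds has traveled about 3 meters in
# total, which a LiDAR would interpret as a surface about 1.5 meters away.
t = 10e-9
print(path_length(t))
print(lidar_depth(t))
```

Photons that bounce off the relay wall to a hidden object and back arrive later than the first return, so their longer path lengths carry the extra information used for NLOS reconstruction.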


Host: Professor, for you observing the progress on this research, how was that for you from your experience?


Ioannis: So it's pretty interesting to be doing research in non-line-of-sight imaging, because, as Shumian mentioned, there is a pretty large part of the computational imaging community working on this problem. And there have already been a few amazing achievements in this area by other groups, like Matthew O'Toole and Gordon Wetzstein at Stanford, Andreas Velten at Wisconsin, and so on. All of these provided us with a lot of inspiration about how to continue pushing forward on this problem. So it's been pretty exciting to watch over the years and see how much we can add with our own paper.


Host: And given what you know today, where would both of you or what would your perspective be on what the next stage of development needs to be?


Ioannis: I mean, the main problem in NLOS imaging is signal-to-noise ratio. We're trying to measure photons that bounce several times off the walls, go to other parts of the scene, and come back to us. There are very few of those photons; we're measuring 10 to 15 photons, that's about the level of signal we have. So the thing we really need to push forward is increasing that signal, in order to make all of these applications Shumian mentioned earlier practical. I think we are now in a pretty good place as far as reconstructing something from the signal once we can find it. So now we need to work on the first part: how do we enhance the signal, so we can use it in much more uncontrolled settings than what we're doing right now.


Host: Shumian, from your perspective, is there anything additional you'd like to add as well?


Shumian: I wish I could add to what Ioannis said.


Host: You're all good to go with that? That's excellent. Again, this question is for both of you: the paper has six authors from three institutes. Can you tell us more, certainly from your perspective, Shumian, about the teamwork and the collaboration? How was that experience for you?


Shumian: Yeah, so Ioannis and Srinivasa are both my PhD advisors, and the three of us have weekly meetings to discuss this. But the very initial idea of this work, the first thoughts about this kind of algorithm, came from discussions between Ioannis and Aswin; Aswin contributed a lot intellectually to this work. We also had very useful discussions with Kyros, who put a lot of effort into this as well. They provided us with the initial hardware setup at the University of Toronto, and Sotiris did all of the tedious experimental work at first. Without Sotiris and Kyros and their hardware work, none of this could actually have been applied to get real results.


Host: When you work in such a collaborative environment, how do you communicate with each other on a daily basis? Like how does that teamwork come together and form as well?


Shumian: Mainly I communicate with Ioannis daily, and Ioannis and I have meetings twice a week. Srinivasa always drops by my office and says, "What's happening?" "What's going on?" And I explain to him what is happening that day and what I plan to do next. With Kyros, Aswin, and Sotiris, we discuss the ideas I have at a high level, to see if this is the correct direction to go.


Host: Do you ever have differing opinions about the way forward?


Shumian: That's right. That's how research is done. Different people have different opinions, and I try out some of those ideas to see which things work and to verify them.


Host: So it's constant iteration.


Shumian: Yes, exactly. I think that's how each paper is done.


Host: And Professor, from your perspective, then from the university’s point of view, how was the collaboration as well?


Ioannis: Yeah, it's been a pretty interesting collaboration. Kyros and Srinivasa are both very senior people in our field, and they always bring a lot of wisdom. As Shumian mentioned, Aswin and I originally discussed the first steps towards this idea, and we were working on solving some mathematical problems. Then, as Shumian said, recently, thanks to Sotiris from the University of Toronto, we got the first measurements showing that the algorithm we came up with would actually work. So it was a pretty significant team effort.


Host: That sounds great. So where do you go next from here, Shumian? What are your next steps?


Shumian: Yeah, I really appreciate the award; it is a great encouragement. Currently I'm a second-year PhD student, so this motivates me to work harder and push my own boundaries even further. And I encourage everyone who is interested in computational imaging to join: it's an interesting and exciting field. If you are interested in physics, optics, or computer vision, it's really a cross-section of everything, so working in this field is really exciting.


Host: That's awesome. And Professor, from your perspective, how does it feel to see Shumian get this achievement today, the award for the team as well?


Ioannis: Yeah, it's great. As Shumian said, it's nice to see computational imaging, a smaller part of the computer vision community right now, be recognized. Hopefully this will encourage others to also work in this area. It's also great to see female students receive this award; we have a lot of issues with diversity in STEM, so I hope this can also help in that direction.


Host: Very encouraging, very inspiring for everyone out there. That's awesome. And I really love the applications that you talked about. They're very real life significant challenges in our world today. So I want to thank you both for joining us. It's been our pleasure to have you with us today. And congratulations again. It's such a great achievement.


Shumian: Thank you.

Ioannis: Thank you very much.

For more videos on CVPR 2019, check out our collection.

Robin.ly CVPR 2019 talks - Crossminds.ai



Robin.ly is a content platform dedicated to helping engineers and researchers develop leadership, entrepreneurship, and AI insights to scale their impacts in the new tech era.