Comment Re:It won't go anywhere (Score 2) 63

If you change the environment completely, it will stop working, at least with the algorithm we have used so far. We tried moving some objects (like the white box you can see in the figures of the article) in the background of the image, and the algorithm still works, but changing the whole room would certainly break the imaging.

Comment Re:It won't go anywhere (Score 2) 63

Note that we do not store any data; we train an AI algorithm that learns from examples, which is different from comparing new data to stored data. As with many technologies, one always looks for a compromise. It won't be possible, even in the near future, to have a 3D sensor with megapixel resolution that works at 1000 fps. But for many applications you don't even need that. If you have a static scenario, for instance a fixed location in a public space where you want to obtain images or count how many people are there, without recording their faces or any personal information that could identify them, this approach can be an option. If you need some form of 3D sensing, even if not very accurate, that works at 1000 fps (for instance), then this could be useful. If you want very high resolution, then no.

Comment Re:It is not LIDAR, but not entirely new either (Score 1) 63

You're right that LiDAR shines a laser in different directions to create an image. Here we don't collect the background 3D data first; we shoot light pulses and collect them with the single-point sensor while another 3D sensor captures the same scene. With this data we train an AI algorithm, and after it is trained, we remove the 3D sensor and collect only with the single point. The data from the single point + AI algorithm is what creates the 3D image. The TED presentation you shared is stunning! However, they do use a camera, which means they had an array of pixels from which they could extract spatial information in the XY plane, while the Z information is taken from the time of flight of photons arriving at every pixel. I hope it is a bit clearer now :)
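To make the training setup concrete, here is a minimal sketch of the idea described above: during training, each 1D temporal histogram from the single-point sensor is paired with a depth map from the reference 3D sensor; at inference, only the histogram is used. The dimensions, the random data, and the use of a linear least-squares fit (as a stand-in for the actual neural network) are all hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: each training sample pairs a temporal histogram
# (photon counts per time bin from the single-point sensor) with a
# low-resolution depth map from the reference 3D sensor.
N_SAMPLES, N_BINS, H, W = 200, 64, 8, 8

histograms = rng.poisson(5.0, size=(N_SAMPLES, N_BINS)).astype(float)
depth_maps = rng.uniform(0.5, 4.0, size=(N_SAMPLES, H * W))

# "Training": fit a linear map from histogram to depth map by least
# squares -- a toy stand-in for training a neural network on the pairs.
weights, *_ = np.linalg.lstsq(histograms, depth_maps, rcond=None)

# Inference: the 3D sensor is removed; only the single-point histogram
# is used to reconstruct a depth image.
new_histogram = rng.poisson(5.0, size=(1, N_BINS)).astype(float)
reconstruction = (new_histogram @ weights).reshape(H, W)
print(reconstruction.shape)  # (8, 8)
```

The key structural point is visible in the last two lines: once the weights are learned, nothing spatial is measured anymore; an image comes out of purely temporal data.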

Comment Re:On the AI reconstruction (Score 4, Interesting) 63

As a matter of fact, different objects give different signals, even if they are at the same distance. The peak position of the signal might be in the same place, but the signal itself is different. Even the signals for two different people in the exact same place are different: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.nature.com%2Farticle... One of the keys is that the scene has moving objects and a fixed background, and we exploit that to recognize where objects are and what shape they have. But you're correct in saying that a lot of information comes from what the algorithm has seen before; otherwise the problem would be impossible to solve.
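The "same peak, different signal" point can be illustrated with a toy simulation: the echo from an object is roughly the illumination pulse convolved with the object's depth profile along the line of sight. Below, a flat surface and a sloped surface centered at the same distance produce echoes with identical peak positions but different waveforms. All numbers (pulse shape, bin positions) are invented for illustration.

```python
import numpy as np

n = 100
t = np.arange(n)
# Gaussian illumination pulse centered at time bin 10.
pulse = np.exp(-0.5 * ((t - 10) / 2.0) ** 2)

# Flat object: all light returns from a single depth bin (40).
flat_object = np.zeros(n)
flat_object[40] = 1.0

# Sloped object: returns spread over bins 37..43, same center depth.
sloped_object = np.zeros(n)
sloped_object[37:44] = 1.0 / 7.0

# Echo = pulse convolved with the object's depth profile.
echo_flat = np.convolve(pulse, flat_object)[:n]
echo_sloped = np.convolve(pulse, sloped_object)[:n]

print(np.argmax(echo_flat), np.argmax(echo_sloped))  # 50 50  (same peak)
print(np.allclose(echo_flat, echo_sloped))           # False  (different waveform)
```

The sloped object's echo is broader and lower than the flat object's even though both peak at the same time bin, which is the kind of waveform difference a trained algorithm can exploit.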

Comment Re:single point and ai trained.. but.. (Score 5, Interesting) 63

You got it perfectly! The system can tell the orientation of the "banana" (as you say) because it uses not only information about the banana itself but about all the objects appearing in the scene. This allows breaking the symmetry you mention. Having a single sensor and no scanning parts means the system works orders of magnitude faster than a LiDAR (which scans the scene). We used only one sensor to demonstrate that that's enough to form an image. We don't claim that this is better than LiDAR in terms of resolution, but creating an image using only time information is a conceptual change in what the requirements for imaging are. It means that you can now use any pulsed source, like a wifi antenna, to create an image, for instance. Of course, adding more sensors would improve the image; imagine a camera where every pixel has this capability!
