Photo sense and non sense

For all that neural networks can accomplish, we still don't really understand how they operate. Sure, we can program them to learn, but making sense of a machine's decision-making process remains much like a fancy puzzle with a dizzying, complex pattern where plenty of integral pieces have yet to be fitted.

If a model were trying to classify an image of said puzzle, for example, it could encounter well-known but annoying adversarial attacks, or more run-of-the-mill data and processing issues. But a new, more subtle type of failure recently identified by MIT scientists is another cause for concern: "overinterpretation," where algorithms make confident predictions based on details that don't make sense to humans, like random patterns or image borders.

This could be particularly worrisome for high-stakes environments, like split-second decisions for self-driving cars, and medical diagnostics for diseases that need immediate attention. Autonomous vehicles in particular rely heavily on systems that can accurately understand their surroundings and then make quick, safe decisions. The network used specific backgrounds, edges, or particular patterns of the sky to classify traffic lights and street signs, irrespective of what else was in the image.

The team found that neural networks trained on popular datasets like CIFAR-10 and ImageNet suffered from overinterpretation. Models trained on CIFAR-10, for example, made confident predictions even when 95 percent of each input image was missing and the remainder was senseless to humans.

"Overinterpretation is a dataset problem that's caused by these nonsensical signals in datasets. Not only are these high-confidence images unrecognizable, but they contain less than 10 percent of the original image, in unimportant areas such as borders. We found that these images were meaningless to humans, yet models can still classify them with high confidence," says Brandon Carter, MIT Computer Science and Artificial Intelligence Laboratory PhD student and lead author on a paper about the research.

Deep-image classifiers are widely used. In addition to medical diagnosis and boosting autonomous vehicle technology, there are use cases in security, gaming, and even an app that tells you whether something is or isn't a hot dog, because sometimes we need reassurance. The technology works by processing individual pixels from tons of pre-labeled images for the network to "learn."

Image classification is hard because machine-learning models can latch onto these nonsensical, subtle signals. When image classifiers are trained on datasets such as ImageNet, they can make seemingly reliable predictions based on those signals. Although these nonsensical signals can lead to model fragility in the real world, the signals are actually valid in the datasets, meaning overinterpretation can't be diagnosed using typical evaluation methods based on accuracy.

To find the rationale for a model's prediction on a particular input, the methods in this study start with the full image and repeatedly ask: what can I remove from this image? Essentially, they keep covering up the image until they're left with the smallest piece that still supports a confident decision.

To that end, these methods could also be used as a type of validation criterion. For example, if you have an autonomous car that uses a trained machine-learning method for recognizing stop signs, you could test that method by identifying the smallest input subset that constitutes a stop sign.
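To make the "keep covering up the image" idea concrete, here is a minimal sketch in Python/PyTorch of that kind of greedy masking loop. The names model, image, target_class, and the confidence threshold are placeholders rather than anything from the paper, and the sketch only illustrates the general idea described above, not the authors' exact subset-selection procedure.

import torch
import torch.nn.functional as F

def minimal_confident_subset(model, image, target_class, threshold=0.9):
    """Greedily zero out pixels while the classifier stays confident.

    image: a (3, H, W) float tensor. Returns a boolean (H, W) mask marking
    the pixels that had to be kept for confidence to stay above threshold.
    """
    model.eval()
    _, h, w = image.shape
    keep = torch.ones(h, w, dtype=torch.bool)

    def confidence(mask):
        # Masked-out pixels are simply replaced with zeros in this sketch.
        masked = image * mask
        with torch.no_grad():
            probs = F.softmax(model(masked.unsqueeze(0)), dim=1)
        return probs[0, target_class].item()

    # One greedy pass: try dropping each pixel, and keep the drop only if the
    # model's confidence in the target class survives.
    for idx in keep.nonzero():
        y, x = idx.tolist()
        keep[y, x] = False
        if confidence(keep) < threshold:
            keep[y, x] = True  # removing this pixel hurt confidence; undo it
    return keep

# Example usage (placeholder names): how much of the image is really needed?
# mask = minimal_confident_subset(model, image, target_class=stop_sign_index)
# print(f"confident with only {mask.float().mean().item():.1%} of pixels kept")

In practice you would batch the masked evaluations and use a smarter search than a single per-pixel pass, but even this crude version makes the point: if the returned mask covers only a sliver of border pixels and the model is still sure it sees a stop sign, that is overinterpretation at work.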