The tools, known as Sensory Substitution Devices (SSDs), take information from one sense and present it in another.
For example, they enable blind people to "see" by using other senses such as touch or hearing.
By using a smartphone or webcam to translate a visual image into a distinct soundscape, SSDs enable blind users to build a mental image of objects, including their physical dimensions and colour.
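The article does not describe the exact mapping these devices use, but a common scheme in visual-to-auditory SSDs scans the image column by column over time, mapping pixel height to pitch and brightness to loudness. The sketch below is a minimal, hypothetical illustration of that idea, not the algorithm of any specific device:

```python
import numpy as np

def image_to_soundscape(image, duration=1.0, sample_rate=22050,
                        f_min=200.0, f_max=4000.0):
    """Turn a 2-D grayscale image (values 0-1, row 0 = top) into a mono
    soundscape: columns are scanned left to right over `duration` seconds;
    each row contributes a sine tone whose pitch rises with height in the
    image and whose loudness follows pixel brightness."""
    n_rows, n_cols = image.shape
    samples_per_col = int(duration * sample_rate / n_cols)
    # One frequency per row; higher rows map to higher pitch (log spacing).
    freqs = np.geomspace(f_max, f_min, n_rows)
    audio = []
    for col in range(n_cols):
        t = np.arange(samples_per_col) / sample_rate
        # Sum one sine per row, weighted by that pixel's brightness.
        tones = image[:, col, None] * np.sin(2 * np.pi * freqs[:, None] * t)
        audio.append(tones.sum(axis=0))
    audio = np.concatenate(audio)
    peak = np.abs(audio).max()
    return audio / peak if peak > 0 else audio  # normalise to [-1, 1]

# Example: a single bright dot in the upper-left quadrant produces a
# high-pitched tone near the start of the soundscape.
img = np.zeros((8, 8))
img[1, 1] = 1.0
wave = image_to_soundscape(img, duration=0.5)
```

Under such a mapping, a listener can in principle learn that "early, high-pitched" means "top-left of the scene", which is the kind of cross-sensory translation the trained users in the study exploit.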
"These devices can help the blind in their everyday life," explained professor Amir Amedi from Hebrew University's Amedi Lab for Brain and Multisensory Research.
With intense training, blind users can even "read" letters by identifying their distinct soundscape.
"These devices also open unique research opportunities by letting us see what happens in brain regions normally associated with one sense, when the relevant information comes from another," he added.
For the study, the researchers used functional magnetic resonance imaging (fMRI) to study the brains of blind subjects in real time while they used an SSD to identify objects by their sound.
They found that when it comes to recognising letters, body postures and more, specialised brain areas are activated by the task at hand, rather than by the sense (vision or hearing) being used.
"This means that the main criteria for a reading area to develop is not the letters' visual symbols but rather the area's connectivity to the brain's language-processing centres," Amedi noted.
Similarly, a number-processing area will develop in a region that already has connections to quantity-processing regions, he added.
The findings suggest that unexpected brain connectivity can lead to fast brain specialisation, allowing humans to adapt to the rapid technological and cultural innovation of our generation.
"If we take this one step further, this connectivity-based mechanism might explain how brain areas could have developed so quickly on an evolutionary timescale," the authors concluded.