A processing device is configured to interface with a region of the brain of a subject that is responsible for forming concepts without sensory input. The processing device receives brain signals representative of at least one concept formed by the region of the brain without sensory input, and processes the received brain signals so as to convert the at least one concept to data that is representative of a tangible form of the at least one concept. In certain embodiments, the processing device processes data that is representative of at least one concept to be formed by the region so as to convert the data into one or more brain signals, and selectively provides the one or more brain signals to the region of the brain such that the at least one concept represented by the data is formed by the region of the brain without sensory input.
1. A method for use with an animal subject having a brain that includes a region that is responsible for forming concepts without sensory input, the method comprising:
interfacing a processing device with the region of the brain;
receiving, by the processing device, nerve impulses that are transmitted along a transmission path to or through the region of the brain and that are representative of at least one concept formed by the region of the brain without sensory input; and
processing, by the processing device, the received nerve impulses so as to convert the at least one concept to computer readable data that is representative of a tangible form of the at least one concept, wherein the processing includes applying to the received nerve impulses at least one mapping that maps between nerve impulses and computer readable data, wherein the at least one mapping is a one-to-one mapping such that each nerve impulse is mapped to a corresponding data element of computer readable data.
12. A system for use with an animal subject having a brain that includes a region that is responsible for forming concepts without sensory input, the system comprising:
a processing device interfaced with the region of the brain and configured to:
receive nerve impulses that are transmitted along a transmission path to or through the region of the brain and that are representative of at least one concept formed by the region of the brain without sensory input, and
process the received nerve impulses to convert the at least one concept to computer readable data that is representative of a tangible form of the at least one concept, wherein the processing device is configured to process the received nerve impulses by applying to the received nerve impulses at least one mapping that maps between nerve impulses and computer readable data, wherein the at least one mapping is a one-to-one mapping such that each nerve impulse is mapped to a corresponding data element of computer readable data.
22. A method for use with an animal subject having a brain that includes a region that is responsible for forming concepts without sensory input, the method comprising:
interfacing a processing device with the region of the brain;
processing, by the processing device, computer readable data that is representative of at least one concept to be formed by the region so as to convert the computer readable data into one or more nerve impulses that are transmittable along a transmission path to or through the region of the brain, wherein the processing includes applying to the computer readable data at least one mapping that maps between nerve impulses and computer readable data, wherein the at least one mapping is a one-to-one mapping such that each data element of the computer readable data is mapped to a corresponding nerve impulse; and
selectively providing the one or more nerve impulses to the region of the brain via the transmission path such that the at least one concept represented by the computer readable data is formed by the region of the brain without sensory input.
24. A system for use with an animal subject having a brain that includes a region that is responsible for forming concepts without sensory input, the system comprising:
a processing device interfaced with the region of the brain and configured to:
receive computer readable data that is representative of at least one concept to be formed by the region,
process the received computer readable data so as to convert the computer readable data into one or more nerve impulses that are transmittable along a transmission path to or through the region of the brain, wherein the processing device is configured to process the received computer readable data by applying to the received computer readable data at least one mapping that maps between nerve impulses and computer readable data, wherein the at least one mapping is a one-to-one mapping such that each data element of the computer readable data is mapped to a corresponding nerve impulse, and
selectively provide the one or more nerve impulses to the region of the brain via the transmission path such that the at least one concept represented by the computer readable data is formed by the region of the brain without sensory input.
2. The method of
3. The method of
4. The method of
5. The method of
6. The method of
7. The method of
8. The method of
modifying the computer readable data to produce modified data;
converting the modified data into one or more nerve impulses; and
providing the one or more nerve impulses to the region of the brain such that a concept represented by the one or more nerve impulses is formed by the region of the brain without sensory input.
9. The method of
10. The method of
11. The method of
13. The system of
14. The system of
16. The system of
17. The system of
18. The system of
modify the computer readable data to produce modified data;
convert the modified data into one or more nerve impulses; and
provide the one or more nerve impulses to the region of the brain such that a concept represented by the one or more nerve impulses is formed by the region of the brain without sensory input.
19. The system of
20. The system of
21. The system of
23. The method of
This application claims priority from U.S. Provisional Patent Application No. 63/226,821, filed Jul. 29, 2021, whose disclosure is incorporated by reference in its entirety herein. This application is also related to commonly owned U.S. Pat. No. 11,395,620 and its corresponding International Application No. PCT/IB2022/054777, as well as commonly owned U.S. patent application Ser. No. 17/728,013 and its corresponding International Application No. PCT/IB2022/055761, the disclosures of which are incorporated by reference in their entireties herein.
The present invention relates to the formation of concepts by the brain, and more particularly methods and systems for rendering and/or injecting such concepts.
Imagination is the ability to form concepts, including objects and sensations, in the mind without any immediate input from the senses. These concepts (objects and sensations) can be in the form of, for example, mental images, phonological passages (i.e., non-acoustic sounds), analogies, and narratives.
According to the teachings of an embodiment of the present invention, there is provided a method for use with an animal subject having a brain that includes a region that is responsible for forming concepts without sensory input. The method comprises: interfacing a processing device with the region of the brain; receiving, by the processing device, brain signals representative of at least one concept formed by the region of the brain without sensory input; and processing, by the processing device, the received brain signals so as to convert the at least one concept to data that is representative of a tangible form of the at least one concept.
Optionally, the interfacing includes: implanting at least a portion of a machine-subject interface in the subject in association with the region of the brain so as to provide a communicative coupling between the processing device and the region of the brain.
Optionally, the interfacing includes: implanting the processing device in the subject.
Optionally, the interfacing includes: deploying the processing device externally to the brain.
Optionally, the processing device receives the brain signals representative of the at least one concept via a non-invasive technique.
Optionally, the method further comprises: performing at least one operation on the data according to one or more rules.
Optionally, the at least one operation includes one or more of: i) storing the data in a computerized storage device communicatively coupled with the processing device, ii) sending the data to a computerized server system communicatively coupled with the processing device via one or more communication networks, or iii) modifying the data to produce modified data.
Optionally, the method further comprises: modifying the data to produce modified data; converting the modified data into one or more brain signals; and providing the one or more brain signals to the region of the brain such that a concept represented by the one or more brain signals is formed by the region of the brain without sensory input.
Optionally, the at least one concept is a mental image.
Optionally, the at least one concept is a non-acoustic sound.
Optionally, the tangible form includes an image that is visible to the subject or another viewer.
Optionally, the tangible form includes a sound that is perceptible to the subject or another listener.
Optionally, the brain signals include nerve impulses transmitted along a transmission path to or through the region of the brain.
Optionally, the processing the received brain signals includes applying to the received brain signals at least one mapping that maps between brain signals and data.
There is also provided according to an embodiment of the teachings of the present invention a system for use with an animal subject having a brain that includes a region that is responsible for forming concepts without sensory input. The system comprises: a processing device interfaced with the region of the brain and configured to: receive brain signals representative of at least one concept formed by the region of the brain without sensory input, and process the received brain signals to convert the at least one concept to data that is representative of a tangible form of the at least one concept.
Optionally, the system further comprises: a machine-subject interface for interfacing the processing device with the region of the brain.
Optionally, at least a portion of the machine-subject interface is configured to be implanted in the subject in association with the region of the brain so as to provide a communicative coupling between the processing device and the region of the brain.
Optionally, the processing device is external to the brain.
Optionally, the processing device receives brain signals representative of the at least one concept via a non-invasive technique.
Optionally, the processing device is further configured to send the data to one or more of: i) a computerized storage device communicatively coupled with the processing device, and ii) a computerized server system communicatively coupled with the processing device via one or more communication networks.
Optionally, the processing device is further configured to: modify the data to produce modified data, convert the modified data into one or more brain signals, and provide the one or more brain signals to the region of the brain such that a concept represented by the one or more brain signals is formed by the region of the brain without sensory input.
Optionally, the at least one concept is a mental image.
Optionally, the at least one concept is a non-acoustic sound.
Optionally, the tangible form includes an image that is visible to the subject or another viewer.
Optionally, the tangible form includes a sound that is perceptible to the subject or another listener.
Optionally, the brain signals include nerve impulses transmitted along a transmission path to or through the region of the brain.
Optionally, the processing device is configured to process the received brain signals by applying to the received brain signals at least one mapping that maps between brain signals and data.
There is also provided according to an embodiment of the teachings of the present invention a method for use with an animal subject having a brain that includes a region that is responsible for forming concepts without sensory input. The method comprises: interfacing a processing device with the region of the brain; processing, by the processing device, data that is representative of at least one concept to be formed by the region so as to convert the data into one or more brain signals; and selectively providing the one or more brain signals to the region of the brain such that the at least one concept represented by the data is formed by the region of the brain without sensory input.
Optionally, the method further comprises: providing the data to the processing device prior to processing the data by the processing device, wherein the data is provided to the processing device by at least one of: i) a memory device communicatively coupled with the processing device, ii) a computerized server system communicatively coupled with the processing device via one or more communication networks, or iii) a data capture device associated with the processing device.
Optionally, the processing the data includes applying to the data at least one mapping that maps between brain signals and data.
There is also provided according to an embodiment of the teachings of the present invention a system for use with an animal subject having a brain that includes a region that is responsible for forming concepts without sensory input. The system comprises: a processing device interfaced with the region of the brain and configured to: receive data that is representative of at least one concept to be formed by the region, process the received data so as to convert the data into one or more brain signals, and selectively provide the one or more brain signals to the region of the brain such that the at least one concept represented by the data is formed by the region of the brain without sensory input.
Optionally, the processing device is configured to process the received data by applying to the received data at least one mapping that maps between brain signals and data.
Unless otherwise defined herein, all technical and/or scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the invention pertains. Although methods and materials similar or equivalent to those described herein may be used in the practice or testing of embodiments of the invention, exemplary methods and/or materials are described below. In case of conflict, the patent specification, including definitions, will control. In addition, the materials, methods, and examples are illustrative only and are not intended to be necessarily limiting.
Some embodiments of the present invention are herein described, by way of example only, with reference to the accompanying drawings. With specific reference to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of embodiments of the invention. In this regard, the description taken with the drawings makes apparent to those skilled in the art how embodiments of the invention may be practiced.
Attention is now directed to the drawings, where like reference numerals or characters indicate corresponding or like components. In the drawings:
Embodiments of the present invention provide methods and systems in which a processing device, interfaced with a region of an animal subject's brain that is responsible for forming concepts without immediate input from the senses of the subject (i.e., a region that is responsible for imagination), receives brain signals that are representative of a formed concept or formed concepts, and then processes the received brain signals so as to convert the concept or concepts to data that is representative of a tangible form of the concept or concepts. This process of receiving brain signals and converting the brain signals to data is referred to interchangeably herein as “imagination rendering”, “non-sensory information rendering”, or “concept rendering”.
In preferred embodiments, the processing device is also configured to process data that is representative of a concept or concepts to be formed by the region of the brain, so as to convert the data into brain signals, and to selectively provide the brain signals to the region of the brain such that the concept or concepts represented by the data is or are formed by the region of the brain without immediate input from the senses of the subject. This process of converting data to brain signals and providing the brain signals to the brain is referred to interchangeably herein as “imagination injection”, “imagination inducement”, “concept injection”, or “concept inducement”.
A concept that is formed by the region of the brain without immediate input from the senses that is to be converted to data is referred to interchangeably herein as “a concept that can be rendered”, “a render-able concept”, “imagination information that can be rendered”, “render-able imagination information”, “non-sensory information that can be rendered”, or “non-sensory render-able information”. A concept that is converted to data is referred to interchangeably herein as “rendered imagination information”, “rendered non-sensory information”, or “a rendered concept”.
A concept that is represented by data that is to be converted to brain signals and provided to the region such that the region forms the concept without immediate input from the senses is referred to interchangeably herein as “a concept that can be injected or induced”, “an injectable or inducible concept”, “imagination information that can be injected or induced”, “injectable or inducible imagination information”, “non-sensory information that can be injected or induced”, or “injectable or inducible non-sensory information”. Injectable (or inducible) imagination information may also refer to the data that is representative of an injectable/inducible concept. A concept, represented by data that has been converted to brain signals and provided to the region of the brain such that the concept is formed by the region, is referred to interchangeably herein as “injected or induced imagination information”, “an injected or induced concept”, or “injected or induced non-sensory information”.
As will be discussed in detail below, in certain embodiments the imagination information that can be rendered or injected using the methods and system according to embodiments of the present invention can be visual information (e.g., an image). In other words, in certain embodiments, the concepts can be visual imagination information, also referred to as “imagination images”, “imagined images”, or “mental images”. In other embodiments, the imagination information (i.e., concepts) that can be rendered or injected using the methods and system according to embodiments of the present invention can be audio information (e.g., sound). In other words, in certain embodiments, the concepts can be audio imagination information, also referred to as “sound imagination information”, “imagination sounds”, “imagined sounds”, “non-acoustic sounds”, or “mental sounds”.
Within the context of this document, in certain cases “imagination” also includes “dreams” and/or “thoughts”.
The principles and operation of the methods and systems according to the present invention may be better understood with reference to the drawings accompanying the description.
Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not necessarily limited in its application to the details of construction and the arrangement of the components and/or methods set forth in the following description and/or illustrated in the drawings and/or the examples. The invention is capable of other embodiments or of being practiced or carried out in various ways.
Referring now to the drawings,
In certain non-limiting embodiments, the region 103 of the brain 102 with which the processing device 12 is interfaced is a region that forms visual concepts (i.e., “mental images”). In such embodiments, the imagination information that can be rendered and injected is visual imagination information. In other non-limiting embodiments, the region 103 of the brain 102 with which the processing device 12 is interfaced is a region that is responsible for forming sound concepts (phonological passages) without sensory input (i.e., forming non-acoustic sounds or imagined sounds). In such embodiments, the imagination information that can be rendered and injected is audio/sound imagination information.
It is noted that in the non-limiting illustrative embodiment depicted in
In preferred embodiments, the processing device 12 is operative to receive brain signals that are representative of at least one concept formed by the region 103 of the brain without sensory input (i.e., without immediate input from the senses of the subject 100). In other words, the processing device 12 is operative to receive brain signals that carry imagination information (e.g., visual imagination information or sound imagination information) and as such the brain signals are imagination information bearing signals. These brain signals can be any type of information bearing signal that can be read, picked-up, or captured by a computer or a machine or a machine-subject interface. In one particular non-limiting example, the brain signals are in the form of nerve impulses that propagate along a nerve pathway that connects or provides input to the region 103 of the brain 102.
The process of receiving brain signals by the processing device 12 is generally referred to herein as “collecting brain signals” or “collection of brain signals”.
The processing device 12 is further operative to process the received brain signals (collected brain signals) so as to generate (produce) data (preferably digital data) from the collected brain signals. In particular, the processing device 12 is operative to process the collected brain signals to convert the render-able imagination information that is carried by the brain signals to data that is representative of a tangible form of the imagination information, thereby rendering the render-able imagination information to tangible form. The data is preferably machine-readable data and/or computer readable data. In other words, the data is preferably in a form or format such that the data can be provided to a suitable machine or computer (processor) and read (for further processing or output) by that machine or computer.
In certain embodiments, the received signals that are processed by the processing device 12 can be nerve impulses, or can be representative signals which are produced (i.e., generated) in response to measurement or sampling of brain activity in the region 103 by some type of microdevice, for example a microdevice that has microelectrodes or microtransducers, associated with the processing device 12.
The processing device 12 processes the brain signals (collected brain signals) by applying a mapping function or functions (that contain mapping data) to the brain signals. The mapping function maps between imagination information bearing brain signals and data, i.e., provides a transformation from brain signals to data and vice versa, such that the imagination information (i.e., concept) that is represented by the received brain signals is converted (transformed) to data, thereby rendering render-able imagination information to a tangible form (as a result of the application of the mapping function by the processing device 12). This brain signal to data mapping function is referred to hereinafter interchangeably as an “imagination-data mapping”. The imagination-data mapping is preferably a one-to-one mapping. The mapping is preferably such that the data that is converted from the brain signals using the imagination-data mapping function(s) provides a faithful representation of the concept formed by the region 103. In other words, the mapping is preferably such that the data provides a faithful representation of the image or sound that is imagined by the subject 100.
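The one-to-one character of the imagination-data mapping can be illustrated with a minimal sketch. The class and the impulse codes below are purely hypothetical stand-ins (the specification does not prescribe any data format); the sketch only shows the bijective, invertible property described above, not an actual neural decoding method.

```python
# Hypothetical sketch of a one-to-one "imagination-data mapping" as an
# invertible lookup table. Real brain-signal decoding would require signal
# processing and learned models; this only illustrates the bijection.
class ImaginationDataMapping:
    def __init__(self, table):
        # table: impulse code -> data element; must be one-to-one
        if len(set(table.values())) != len(table):
            raise ValueError("mapping must be one-to-one")
        self._to_data = dict(table)
        # Because the mapping is one-to-one, it can be inverted exactly.
        self._to_impulse = {v: k for k, v in table.items()}

    def render(self, impulses):
        """Convert received impulse codes to computer-readable data."""
        return [self._to_data[i] for i in impulses]

    def inject(self, data):
        """Convert data elements back to impulse codes for injection."""
        return [self._to_impulse[d] for d in data]

# Round trip: rendering then injecting recovers the original impulses.
mapping = ImaginationDataMapping({"imp_a": 0x01, "imp_b": 0x02})
data = mapping.render(["imp_a", "imp_b"])
```

The same table serves both directions, which is why the specification can use a single mapping for both rendering and injection.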
As mentioned, according to certain embodiments, the concept that is formed by the region 103 is a visual concept (i.e., mental image, imagined image), i.e., the processing device 12 converts brain signals that are representative of a mental image. In such embodiments, in one example, the data that is converted from the brain signals can be image data or pixel data whereby the tangible form is a visible image corresponding to the image/pixel data that is a faithful representation of the mental image, which can be viewed by the subject 100 or another viewer.
In other embodiments, the concept that is formed by the region 103 is a sound concept (i.e., imagined sound, non-acoustic sound), i.e., the processing device 12 converts brain signals representative of an imagined sound. In such embodiments, in one example, the data can be sound data in the form of an audio signal or signals, which can be analog or digital (e.g., bits or bytes representing one or more sounds or tones), whereby the tangible form is an acoustic sound corresponding to the audio signal that is a faithful representation of the imagined sound (non-acoustic sound) that can be audibly heard (i.e., that is audibly perceptible) by the subject 100 or another listener.
With continued reference to
The storage/memory 16 can be any conventional storage media or an application specific or special purpose storage media for storing data or information, which although shown as a single component for representative purposes, may be multiple components. The storage/memory 16 can be implemented in various ways, including, for example, one or more volatile or non-volatile memory, a flash memory, a read-only memory, a random-access memory, and the like, or any combination thereof. In certain embodiments, the storage/memory 16 can include one or more components for storing and maintaining the imagination-data mapping, and at least one component configured to store machine executable instructions that can be executed by the one or more processors of the processing device 12.
In certain embodiments, the processing device 12 is further operative to perform at least one operation on the generated data (generated by processing brain signals via application of the imagination-data mapping) in accordance with one or more rules or handling criteria. For example, the processing device 12 can be configured to operate on the generated data according to a set of data storage rules or criteria, such that the processing device 12 sends some or all of data representative of the tangible form of the at least one concept to one or more computerized storage/memory devices associated with the processing device 12. Such associated storage/memory devices can include, for example, the storage/memory 16 of the processing device 12 (
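The rule-driven handling of generated data can be sketched as a simple predicate/action dispatch. All names and the rule format here are illustrative assumptions; the specification does not prescribe how rules are represented.

```python
# Hypothetical sketch of applying handling rules to rendered data: each
# rule pairs a predicate with an action (e.g., store, send, or modify).
def handle(data, rules):
    for predicate, action in rules:
        if predicate(data):
            # An action may replace the data (modification rule) or
            # perform a side effect and leave it unchanged (storage rule).
            data = action(data) or data
    return data

stored = []
rules = [
    # Storage rule: keep a copy of any non-empty rendered data.
    (lambda d: len(d) > 0, lambda d: stored.append(list(d))),
    # Modification rule: scale every data element.
    (lambda d: True, lambda d: [x * 2 for x in d]),
]
result = handle([1, 2, 3], rules)
```

A real implementation would likely route storage and network actions to the storage/memory 16 or server system 34 described above; the list of tuples merely shows the rule/criteria structure.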
With additional reference to
In another non-limiting example, the processing device 12 can be configured to operate on the generated data according to a set of data modification or manipulation rules or criteria to produce modified data. In general, the modified data can be produced from the generated data by one or more of adding data to the generated data (e.g., appending data to or inserting data in the generated data), deleting data from the generated data (e.g., removing subsets of data from the generated data), or changing data elements of the generated data (e.g., changing data values). In embodiments in which the generated data includes pixel data that corresponds to an image that represents visual imagination information, the processing device 12 can modify the data by changing one or more of the pixels to change the image and/or by combining the pixel data with other pixel data representative of another image, which can be a computer-generated image or an image of a real-world scene captured by an image capture device (e.g., camera). The other pixel data can be provided to the processing device 12 via, for example, an image capture device or a memory linked, connected, or otherwise associated with the processing device 12, e.g., the storage/memory 16, external storage/memory 32, server system 34. In embodiments in which the generated data includes an audio signal or signals that corresponds to sound or sounds that represent(s) audio/sound imagination information, the processing device 12 can modify the generated data by, for example, adding additional sound(s) (either from a sound capture device (e.g., microphone) or from a memory associated with the processing device 12, e.g., the storage/memory 16, external storage/memory 32, server system 34), and/or changing or deleting data elements (e.g., bits) of a digital version of the generated audio signal(s), and/or adjusting audio parameters of the audio signal(s), including, for example, volume, pitch, tones, etc.
For example, the processing device 12 can modify the audio signal to increase or decrease the volume associated with the audio signal. As another example, the processing device 12 can modify the audio signal to change one or more frequencies (tones) of the sound. As an additional example, the processing device 12 can modify the generated audio signal by performing noise cancellation or interference reduction signal processing on the generated audio signal.
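The audio modifications above can be sketched on a raw sample sequence. The sample values and helper names are illustrative only; a real system would operate on an actual digital audio stream.

```python
# Hypothetical sketch of two modifications described above: volume
# adjustment (gain) and mixing in an additional sound.
def change_volume(samples, gain):
    # Scaling each sample raises or lowers perceived volume.
    return [s * gain for s in samples]

def mix(samples_a, samples_b):
    # Element-wise sum of two equal-length sample sequences.
    return [a + b for a, b in zip(samples_a, samples_b)]

rendered = [0.1, -0.2, 0.3]          # stand-in for a rendered audio signal
louder = change_volume(rendered, 2.0)  # volume increase
mixed = mix(louder, [0.05, 0.05, 0.05])  # add another sound
```

Frequency (tone) changes and noise cancellation would require spectral processing (e.g., filtering), which is beyond this sketch.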
In certain embodiments, the processing device 12 can also convert the modified data to a new set of one or more brain signals (by applying the imagination-data mapping to the modified data), and then provide those signals to the region 103 of the brain, such that a concept represented by the new set of brain signals is formed by the region 103 without sensory input. In other words, the modified image or sound can then be “imagined” by the subject 100.
The modified data can also be stored in a memory (e.g., storage/memory 16 and/or external storage/memory 32 and/or server system 34).
In another non-limiting example, the processing device 12 can be configured to operate on the generated data according to a set of output rules or criteria. In embodiments in which the generated data includes pixel data that corresponds to an image that represents visual imagination information, the output rules or criteria can include a set of display rules or criteria. For example, the processing device 12 can be configured to provide the generated data to a display device connected or linked to the processing device 12 such that the display device displays one or more images (or video) represented by the pixel data. The processing device 12 can transmit or send the data to such a display device using any suitable image/video transmission format or standard, or any commonly used standards for data transmission, including any of the formats and standards discussed above. As another example, the processing device 12 can provide the generated data for projection or holographic display.
In embodiments in which the generated data includes an audio signal that corresponds to sound(s) that represents audio/sound imagination information, the output rules or criteria can include a set of playback rules or criteria. For example, the processing device 12 can be configured to provide the generated data (audio signal or signals) in digital form to a digital audio playback device (e.g., MP3, digital stereo, etc.) connected or linked to the processing device 12 such that the audio playback device audibly plays sound represented by the generated data. The processing device 12 can transmit or send the data to such an audio playback device using any suitable audio transmission format or standard, or any commonly used standards for data transmission, including any of the formats and standards discussed above. Alternatively, the processing device 12 can be configured to provide the generated data in analog form to an analog audio playback device.
In certain preferred embodiments, the processing device 12 is operative to process obtained/received data (preferably digital data), that is representative of information corresponding to an injectable concept that is to be formed by the region 103 of the brain without sensory input, so as to convert the data into one or more brain signals. The processing device 12 is further operative to selectively provide or transmit the one or more brain signals to the region 103 such that the injectable concept represented by the obtained/received data is formed as a concept by the region 103 without sensory input. In other words, the processing device 12 can obtain or receive data, for example in the form of an image represented by pixel data or an analog or digital audio signal that conveys sound, and convert that data to brain signals, and then provide those brain signals to the region 103 of the brain such that a concept of the image or sound is formed by the region 103, i.e., such that the subject 100 imagines the image or the sound. Thus, the system 10 according to certain aspects of the present invention is operative to convert data, that is representative of a tangible form of an injectable concept/injectable imagination information, to brain signals that are provided to the subject such that the concept is in intangible form. In certain embodiments, the processing device 12 can manipulate (i.e., modify) the received/obtained data prior to converting the data to brain signals. The manipulation/modification of the data can be, for example, according to the set of data modification or manipulation rules or criteria discussed above (or a different set of modification or manipulation rules or criteria).
In certain embodiments, the processing device 12 is configured to transmit the brain signals to the region 103 using nerves or nerve fibers at the input to the region 103. For example, the processing device 12 may provide (transmit) the brain signals by inducing nerve transmission of nerve impulses. In particular, the processing device 12 may provide the brain signals by sending the brain signals to a microdevice, for example one or more microelectrodes or microtransducers that induce transmission of nerve impulses corresponding to the brain signals.
The data that is obtained or received by the processing device 12, and that is to be converted to brain signals by the processing device 12, can be of various types and/or be from various sources. For example, the data that is to be converted can be image data or sound data, and can be provided to the processing device 12 from a memory associated with the processing device 12 (e.g., the storage/memory 16, external storage/memory 32, server system 34) or any other data source or storage medium. As another example, image data can be provided to the processing device 12 by an image capture device communicatively coupled to the processing device 12. As yet another example, sound data (e.g., audio signals) can be provided to the processing device 12 by a sound capture device communicatively coupled to the processing device 12. In a further example, the data that is to be converted to brain signals by the processing device 12 can be the data that was generated by the processing device 12 from collected brain signals, or modified data resultant from modification of data that was generated by the processing device 12 from collected brain signals.
The conversion of the received data to brain signals is effectuated by applying the imagination-data mapping discussed above. Since the brain of each subject may form concepts differently, the mapping for each subject may be a subject-specific mapping (i.e., the mapping for one subject may be different from the mapping for another subject). However, regardless of the specificity of a given imagination-data mapping, the mapping is preferably such that the brain signals converted from data using the imagination-data mapping function(s) create a faithful representation of the true image or sound (represented by the data) in the mind of the subject 100.
Various example methods for generating imagination-data mapping functions will be described in detail in subsequent sections of the present disclosure.
The mapping function or functions can be stored in a memory device associated with the processing device 12, as will be discussed further below. In certain embodiments, the mapping function(s) can be stored as a data item or data structure, for example in the form of a data table that stores mapping parameters and configurations. In other embodiments, the mapping function(s) can be stored as an equation or a set of equations that provide a functional relationship between data and brain signals. In yet further embodiments, the mapping function(s) can be expressed as computer-readable code that executes a mapping algorithm, which can be applied to brain signals and/or received data. The aforementioned formats are exemplary only, and other formats of mapping functions are contemplated herein.
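By way of a non-limiting illustration, the data-table storage format described above can be sketched in code as follows. The class name, impulse codes, and data-element format are illustrative assumptions only, not an actual mapping format; the sketch merely demonstrates a one-to-one lookup maintained in both directions (for rendering and for injection):

```python
# Illustrative sketch of an imagination-data mapping stored as a data table:
# a one-to-one lookup between nerve-impulse codes and data elements (e.g., pixels).
# All names and encodings are hypothetical.

class MappingTable:
    def __init__(self):
        self._impulse_to_data = {}   # impulse code -> data element
        self._data_to_impulse = {}   # data element -> impulse code

    def add_entry(self, impulse_code, data_element):
        # Enforce the one-to-one property required of the mapping.
        if impulse_code in self._impulse_to_data or data_element in self._data_to_impulse:
            raise ValueError("mapping must remain one-to-one")
        self._impulse_to_data[impulse_code] = data_element
        self._data_to_impulse[data_element] = impulse_code

    def to_data(self, impulse_code):
        # Rendering direction: brain signal -> computer readable data.
        return self._impulse_to_data[impulse_code]

    def to_impulse(self, data_element):
        # Injection direction: computer readable data -> brain signal.
        return self._data_to_impulse[data_element]

table = MappingTable()
table.add_entry(impulse_code=0x1A, data_element=("pixel", 0, 0, 255))
assert table.to_data(0x1A) == ("pixel", 0, 0, 255)
assert table.to_impulse(("pixel", 0, 0, 255)) == 0x1A
```

Maintaining the inverse dictionary alongside the forward dictionary is one simple way to support both conversion directions with constant-time lookup.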
With continued reference to
Various deployment configurations for achieving communicative coupling of the processing device 12 to the region 103 are contemplated herein, and several non-limiting example deployment configurations will be described in further detail below. Some of the deployment configurations described herein require some type of implantation in the subject 100, which can be accomplished using invasive or semi-invasive techniques. For example, invasive techniques can include implantation by surgically accessing the region 103 through the subject's skull (i.e., surgically opening the skull). Surgeries performed on the brain have become common over the years, and it is asserted that a trained human surgeon and/or a robotic surgeon (such as used by the Neuralink Corporation of San Francisco, USA) can perform the necessary implantation. Before describing several deployment configurations, it is noted that the deployment configurations described herein are exemplary only and represent only a non-exhaustive subset of possible deployment options for the processing device 12. Other deployment options may be possible, as will be apparent to those of skill in the art.
In one example deployment configuration according to certain non-limiting embodiments, the processing device 12 communicates with the region 103 by tapping the region 103, for example by connecting the processing device 12 to a transmission route/path that is an input to the region 103. A nerve, nerve bundle, or nerve path is one example of such a transmission route/path. In such a deployment configuration, the processing device 12 preferably remains external to the brain 102 of the subject 100, and most preferably external to the skull so as to be at least partially visible when viewing the subject's head. When the processing device 12 is external to the subject 100, the subject interfacing portion 18b is implanted at or on the segment of the transmission route (that is an input to the region 103) together with either the entirety of the linking portion 20, or a segment of the linking portion 20 that connects to the subject interfacing portion 18b. If only the segment of the linking portion 20 that connects to the subject interfacing portion 18b is implanted, the remaining segment of the linking portion 20, which connects to the electronics interfacing portion 18a, is external to the subject 100.
In another example deployment configuration, the processing device 12 is deployed external to the subject 100, and the subject interfacing portion 18b is implanted at or on the region 103 together with either the entirety of the linking portion 20 or a segment of the linking portion 20 that connects to the subject interfacing portion 18b. If only the segment of the linking portion 20 that connects to the subject interfacing portion 18b is implanted, the remaining segment of the linking portion 20, which connects to the electronics interfacing portion 18a, is external to the subject 100. Such an example deployment configuration is schematically illustrated in
In yet another example deployment configuration according to certain non-limiting embodiments, the processing device 12 itself, together with the entirety of the interface 18, can be implanted at or on the region 103. In another example deployment configuration according to non-limiting embodiments, the processing device 12 is implanted at or on a segment of the transmission route (that is an input to the region 103).
Non-invasive deployment configurations are also contemplated herein. For example, the interface 18 can be provided by way of an optical magnetic field sensor arrangement or a non-contact modulation arrangement employing, for example, optic, magnetic, or ultrasound techniques. In such configurations, the interface 18 (and its related components) as well as the processing device 12 are completely external to the brain 102. The external interface 18 picks up brain signals at the region 103 via non-contact or non-invasive contact means, and provides those picked up brain signals to the processing device 12.
With continued reference to
Bearing the above in mind, in the non-limiting example embodiment illustrated in
In one example deployment configuration of the processing device 12 according to the embodiment illustrated in
In another example deployment configuration of the processing device 12 according to the embodiment illustrated in
In embodiments in which the region 103 is the input to the occipital lobe 106 from the parieto-occipital sulcus 109, it is noted that different types of information arriving at the occipital lobe 106 have a common format. Since the information that is input to the occipital lobe 106 has a common format, and since the occipital lobe 106 is the visual processing center of the brain, in certain embodiments the format of the imagined images (i.e., visual imagination information) arriving from the parietal lobe 104 is the same as the format of visual real information. Commonly owned U.S. Pat. No. 11,395,620, which is incorporated by reference in its entirety herein, describes techniques for receiving signals (corresponding to nerve impulses) from the visual cortex (or an equivalent visual processing region of a brain) in response to a subject viewing a scene, and processing those received signals (using a mapping function, referred to as an “impulse-image mapping”) to generate image data that is representative of the visual perception of the scene by the subject. U.S. Pat. No. 11,395,620 additionally describes techniques for using the mapping function to process image data that corresponds to an image or images to convert the image data to nerve impulses, and providing those nerve impulses to the visual cortex (or equivalent visual processing region) such that the subject visually perceives the image or images corresponding to the image data. Since in certain embodiments the format of the imagined images (i.e., visual imagination information) arriving from the parietal lobe 104 is the same as that of visual real information, it is a particular feature of certain aspects according to embodiments of the present invention to leverage the mapping methodologies (for converting nerve impulses to image data and vice versa) described in commonly owned U.S. Pat. No. 11,395,620 in order to enable conversion between brain signals/imagination information and image data.
In particular, the mapping function(s) described in U.S. Pat. No. 11,395,620 can be used as the imagination-data mapping according to embodiments of the present invention in which the imagination information is visual imagination information (i.e., the concept is a mental image). In other words, in certain embodiments the mapping function(s) described in U.S. Pat. No. 11,395,620 are suitable for converting brain signals that represent a visual concept (i.e., a mental image or an imagined image) to image data, and vice versa.
The following paragraphs describe various methods and techniques for generating the impulse-image mapping, which can be used as the imagination-data mapping in the context of visual imagination information. Further discussion of the impulse-image mapping can be found in U.S. Pat. No. 11,395,620.
According to certain embodiments, generation of the impulse-image mapping (i.e., the imagination-data mapping) can be aided by machine learning (ML) or neural network (NN) algorithms. For example, the processing device 12 can employ one or more ML or NN algorithms to learn the signal format of nerve impulses (in response to the eyes of the subject viewing a scene or being provided with visual stimuli), and determine the imagination-data mapping by comparing the nerve impulse format to digital images stored in a memory associated with the processing device 12. In certain embodiments, the stored digital images can be generated by an imaging device, such as a camera, associated with the processing device 12.
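As a highly simplified, hypothetical illustration of such a learning step (not the actual ML/NN algorithm contemplated above), a linear relationship between a single assumed nerve-impulse feature (a firing rate) and a pixel intensity can be fit from paired observations by ordinary least squares:

```python
# Hypothetical sketch: learning a linear mapping between an assumed impulse
# feature (firing rate) and pixel intensity from paired training samples.

def fit_linear(xs, ys):
    """Ordinary least squares fit of y = a*x + b over paired samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Paired training data: impulse firing rates vs. known pixel intensities
rates = [10.0, 20.0, 30.0, 40.0]
intensities = [25.0, 50.0, 75.0, 100.0]
a, b = fit_linear(rates, intensities)
# The fitted model generalizes: an unseen rate of 50.0 maps to intensity 125.0
assert abs(a * 50.0 + b - 125.0) < 1e-9
```

A practical NN-based mapping would of course operate on full impulse patterns rather than a single scalar feature, but the same learn-from-paired-observations structure applies.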
As part of one non-limiting example process for generating the mapping, a sample picture (i.e., image) can be positioned in front of the eyes of the subject 100 as a visual stimulus such that the light from the sample is collected (captured) by the eyes and the processing device 12 collects the nerve impulses sent from the eyes to the brain 102 (along the optic nerves, i.e., the nerve or nerves that transmit nerve impulses from the eyes to the visual cortex or equivalent visual processing region of a brain) in response to the subject viewing the sample. A digital image having image data representative of the same sample can also be stored in a memory associated with the processing device 12 (e.g., storage/memory 16). The digital image can be generated, for example, by an imaging device. The resolution of the digital image is preferably in accordance with a standard resolution, such as, for example, 1920 pixels by 1080 pixels, 1280 pixels by 960 pixels, 800 pixels by 600 pixels, etc. Subsequently, a small change can be made to the sample image, for example by changing a single pixel of the sample image, to produce a new sample image. The new sample image is then placed in front of the eyes of the subject 100, and the processing device 12 collects the nerve impulses sent from the eyes to the brain 102 in response to viewing the new sample image. A digital version of the new sample image, i.e., a digital image having digital image data representative of the new sample, is also preferably stored in the memory (e.g., storage/memory 16) associated with the processing device 12. The digital version of the new sample image can be generated by the processing device 12 applying the same change to the corresponding pixel of the original digital image. This process can continue by making incrementally larger changes to the sample image (e.g., changing two pixels, then changing five pixels, then changing 10 pixels, etc.).
For each changed pixel, the change in the nerve impulse from the eyes (compared to the previous sample) is compared with the change between the new digital image data and the previous digital image data. This process can continue using several different sample images, until each nerve impulse from the eye can be matched in a one-to-one fashion to a corresponding image pixel. This matching between each nerve impulse and a corresponding image pixel constitutes a mapping between nerve impulses and images (i.e., an impulse-image mapping), which can be used as the imagination-data mapping.
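The incremental single-pixel-change procedure above can be sketched as follows. A simulated stand-in replaces real nerve-impulse collection, and the hidden channel assignment, channel count, and pixel values are all illustrative assumptions; the point is that diffing responses to single-pixel changes recovers a one-to-one impulse-image mapping:

```python
# Sketch: build a one-to-one mapping from impulse channels to pixels by
# changing one pixel at a time and observing which response channel changes.
# The "subject" here is simulated; real acquisition is as described above.

import random

NUM_PIXELS = 16

# Hidden ground truth (unknown to the mapping builder): channels[p] is the
# impulse channel that carries pixel p.
channels = list(range(NUM_PIXELS))
random.Random(0).shuffle(channels)

def record_impulses(image):
    # Simulated acquisition: each channel reports the value of its pixel.
    return {c: image[p] for p, c in enumerate(channels)}

def build_mapping():
    baseline = [0] * NUM_PIXELS
    base_resp = record_impulses(baseline)
    mapping = {}
    for p in range(NUM_PIXELS):
        changed = list(baseline)
        changed[p] = 255                      # change a single pixel
        resp = record_impulses(changed)
        diff = [c for c in resp if resp[c] != base_resp[c]]
        mapping[diff[0]] = p                  # the one channel that changed
    return mapping

mapping = build_mapping()
# The recovered mapping matches the hidden channel assignment one-to-one.
assert all(mapping[c] == p for p, c in enumerate(channels))
```

In practice the incrementally larger group changes mentioned above (two pixels, five pixels, etc.) reduce the number of trials needed, at the cost of a more involved diffing step.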
In certain embodiments, the mapping function is stored as, or together with, a configuration table that maintains nerve-impulse-to-image and image-to-nerve-impulse conversion parameters. The configuration table includes all of the image attributes/features, including color, intensity, position, and a nerve impulse encoding value. The size of the table may be in accordance with the resolution of the image, such that for each pixel (or group of pixels), the image data of that pixel (or group of pixels) has a corresponding value for color, intensity, position, and nerve impulse code. As previously mentioned, in certain embodiments, the mapping function can be stored as an equation or a set of equations that provide a functional relationship between data and brain signals, while in other embodiments the mapping function(s) can be expressed as computer-readable code that executes a mapping algorithm which can be applied to brain signals and/or received data.
In a preferred but non-limiting implementation of the process for generating the mapping, anchor points or regions of the digital image are processed first. The anchor points include a pixel (or a group of pixels, typically made up of at least four pixels) at each of the four corners of the digital image, as well as a pixel (or group of pixels) at the center of each edge (i.e., top, bottom, left, and right) of the digital image, resulting in eight anchor points. The color and intensity of each of the eight pixels are correlated with the corresponding nerve impulses when the corresponding anchor points in the sample picture (based on the determined position of the anchor points) are viewed by the eye of the subject 100. When groups of pixels are used, the average color and intensity of the pixels in each group is calculated and set as the color and intensity of the pixel group.
The color and intensity values for the pixels are stored in a table, together with the values of the registered corresponding nerve impulses. Some or all of the pixel values for the anchor points are then changed, and the sample image displayed to the eye is correspondingly changed, and the color and intensity of each of the eight pixels are correlated with the corresponding nerve impulses when the corresponding anchor points in the sample picture are viewed by the eye. This process can be repeated several times, until the correlation between the pixels of the anchor points (either individual pixels or groups of pixels) and the corresponding nerve impulses is verified. The mapping function generation process can then proceed to changing the color and intensity values of selected pixels or groups of pixels that are non-anchor pixels. The changes can be made according to a particular pre-defined sequence, which can include the sequence of color and intensity values for the selected pixels, and then the sequence of selected pixels. In this way, a pixel or group of pixels is selected (according to a pixel selection sequence), and the color and intensity values of the selected pixel(s) are changed according to a color/intensity sequence, and then another pixel or group of pixels is selected (according to the pixel selection sequence) and the color and intensity values of the selected pixel(s) are changed according to the color/intensity sequence, and so on and so forth, until all combinations of color/intensity values across all pixels have been implemented and the corresponding nerve impulses have been recorded/stored (in the table).
Parenthetically, after each pixel or group of pixels is selected and the color/intensity values have been incrementally changed to produce a correlation between nerve impulses and the color/intensity values for those pixels, the accuracy of the correlation can optionally be checked by converting nerve impulses to digital image data using the partial table having the color/intensity values for the selected pixels.
The full table can then be used to convert nerve impulses (collected in response to the eye viewing a sample picture) to a digital image to produce a generated digital image. The generated digital image is then compared to a digital image stored in the memory (e.g., storage/memory 16) associated with the processing device 12 (which in certain embodiments can be generated by an imaging device, e.g., a camera, in response to capturing an image of the sample picture). The comparison can be performed on a pixel-by-pixel basis. If the comparison yields a pixel matching that is within a preferred accuracy level (e.g., if 90% of the pixels of the two images are the same), the mapping process is complete. If the comparison does not yield a pixel matching that is within the preferred accuracy level, the correlation process can be repeated, i.e., anchor points can be selected and the color/intensity values of the pixels can be incrementally changed.
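The pixel-by-pixel accuracy check can be sketched as follows. The flat pixel lists are an illustrative representation, and the 90% threshold is taken from the example above:

```python
# Sketch of the pixel-by-pixel comparison used to decide whether the
# mapping process is complete. Pixel values and threshold are illustrative.

def pixel_match_fraction(generated, reference):
    """Fraction of pixels that are identical in the two equal-size images."""
    assert len(generated) == len(reference)
    same = sum(1 for g, r in zip(generated, reference) if g == r)
    return same / len(reference)

def mapping_complete(generated, reference, threshold=0.90):
    return pixel_match_fraction(generated, reference) >= threshold

reference = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90]
generated = [0, 10, 20, 30, 40, 50, 60, 70, 80, 99]  # one mismatched pixel
assert pixel_match_fraction(generated, reference) == 0.9
assert mapping_complete(generated, reference)        # 9/10 meets the 90% level
```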
The following paragraphs describe another non-limiting example method for generating an imagination-data mapping function in the context of visual imagination information. It is noted at the outset that this method is particularly suitable for generating a mapping function that is to be used with human subjects. However, this method or a similar method may also be suitable for generating mapping functions that are to be used with other animal species, particularly intelligent animal species such as canines and primates.
Bearing this in mind, a sample image that is easily remembered (i.e., can be easily conjured in the subject's mind) is placed in front of the eyes of a subject whose brain is interfaced with the processing device 12. The sample image is preferably composed of distinct and/or easily distinguishable colors, preferably initially black and white. The sample image can be, for example, thick vertical lines with distinct spaces between the lines.
In a next step, the subject can be asked to close their eyes and recall the sample image (e.g., the black and white vertical lines) from memory or using imagination. First data corresponding to the brain signals collected by the processing device 12 in response to the subject imagining the sample image is captured. Subsequently, a change can be made to the sample image to produce a new sample image. The change is preferably a geometric change, for example, a change to the length or thickness of the vertical lines, or a tilting of the lines such that the lines are no longer vertical but slightly angled. As another example, the change can be a shape change, for example changing thick vertical lines to circles. The geometric/shape change is preferably such that the new sample image is easily distinguishable from the sample image, and can be easily remembered/imagined by the subject without confusing the new sample image with the sample image.
The changes/differences between the sample image and the new sample image can be identified/marked/logged. The new sample image can be viewed by the subject, and then the subject can be asked to recall the new sample image from memory/imagination. Second data corresponding to the brain signals collected by the processing device 12 in response to the subject imagining the new sample image is captured and compared with the first data corresponding to the sample image. The changes in the first and second data can be correlated with the changes between the sample image and the new sample image.
This process of changing the sample image and comparing changes in brain signal data can continue, until each component of the data (produced in response to the subject imagining the image) is mapped or associated with a pixel of the image.
The above-described process can also be repeated by applying a change to the colors of the image. For example, if the initial sample image is a black and white image, the process can be repeated by changing the black and white image to a blue and green image, and then a red and yellow image, and so on.
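The diff-and-correlate step of this recall-based method can be sketched as follows. The flat data representation of the brain-signal captures and the logged pixel changes are illustrative assumptions:

```python
# Sketch: associate the components that changed between two brain-signal data
# captures with the logged pixel changes between the two sample images.

def diff_components(first, second):
    """Indices where two equal-length data captures differ."""
    return [i for i, (a, b) in enumerate(zip(first, second)) if a != b]

# First data: captured while imagining the original sample image.
# Second data: captured while imagining the new (modified) sample image.
first_data  = [3, 7, 7, 1, 5]
second_data = [3, 9, 7, 1, 8]
changed_pixels = [(1, 0), (2, 3)]   # logged (x, y) changes between the images

associations = dict(zip(diff_components(first_data, second_data), changed_pixels))
# Components 1 and 4 changed, so they are associated with the two changed pixels.
assert associations == {1: (1, 0), 4: (2, 3)}
```

Repeating this over many image changes gradually associates every data component with a pixel, as described above.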
As mentioned above, the above-described example methods for generating mapping functions are suitable in embodiments in which the region 103 is a part of the brain that is responsible for forming visual concepts (i.e., mental images) without sensory input, in particular where the region 103 is the input to the occipital lobe 106 from the parieto-occipital sulcus 109. In embodiments in which the region 103 is a part of the brain that is responsible for forming sound concepts (i.e., the imagination information that can be rendered or injected is sound/audio imagination information), the region 103 can include a segment along one or more of the nerves which carry sound-related nerve impulses from the ears to the portion of the brain that performs auditory processing (which in humans and other species including, but not limited to, canine species, feline species, non-human primate species, and rodent species is commonly referred to as the auditory cortex). Commonly owned U.S. patent application Ser. No. 17/728,013, which is incorporated by reference in its entirety herein, describes techniques for receiving signals (corresponding to nerve impulses) from the auditory cortex (or an equivalent audial processing region of a brain) in response to a subject hearing acoustic sounds (i.e., in response to the subject's ears being provided with auditory stimuli), and processing those received signals (using a mapping function, referred to as an “impulse-sound mapping”) to generate an audio signal that is representative of the audial perception of the sound heard by the subject. U.S. patent application Ser. No. 17/728,013 additionally describes techniques for using the mapping function to process audio signals that correspond to a sound or sounds to convert the audio signal to nerve impulses, and providing those nerve impulses to the auditory cortex (or equivalent auditory processing region of the brain) such that the subject audially perceives the sound or sounds corresponding to the audio signal. It is therefore a particular feature of certain aspects according to embodiments of the present invention to leverage the mapping methodologies (for converting nerve impulses to audio signals and vice versa) described in commonly owned U.S. patent application Ser. No. 17/728,013 in order to enable conversion between brain signals/imagination information and audio signals/data. In particular, the mapping function(s) described in U.S. patent application Ser. No. 17/728,013 can be used as the imagination-data mapping according to embodiments of the present invention in which the imagination information is audio/sound imagination information (i.e., the concept is a non-acoustic sound). In other words, in certain embodiments the mapping function(s) described in U.S. patent application Ser. No. 17/728,013 are suitable for converting brain signals that represent a sound concept (i.e., a non-acoustic sound or an imagined sound) to an audio signal or signals, and vice versa.
The following paragraphs describe various methods and techniques for generating the impulse-sound mapping, which can be used as the imagination-data mapping in the context of audio/sound imagination information. Further discussion of the impulse-sound mapping can be found in U.S. patent application Ser. No. 17/728,013.
According to certain embodiments, the processing device 12 can generate the impulse-sound mapping (i.e., the imagination-data mapping) by employing one or more ML or NN algorithms to learn the signal format of nerve impulses (in response to auditory stimulation of the ears), and then determining the imagination-data mapping by comparing the nerve impulse format to audio signals, including, for example, digital data stored in a memory associated with the processing device 12 and/or analog audio signals generated by a sound capture device (e.g., microphone) in response to capturing sound.
As part of one non-limiting example process for generating the mapping, an audio sample signal can be generated, which is an amplitude varying signal over some fixed time duration. The audio sample signal is an analog signal that may consist of multiple frequency components corresponding to various sounds (frequency tones), which can be isolated using frequency analysis techniques, e.g., Fourier analysis, including Fast Fourier Transform (FFT). Sound vibrations from the audio sample signal are captured by the ears of the subject 100 and the processing device 12 collects the nerve impulses sent from the ears to the auditory region of the brain 102 (along the acoustic nerves) in response to hearing the sample audio.
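The frequency analysis step can be illustrated with a plain discrete Fourier transform. A real implementation would typically use an optimized FFT library; the tone frequency and sample count below are arbitrary choices for the sketch:

```python
# Sketch: isolate the frequency components of a sampled audio signal using a
# plain DFT (an FFT would be used in practice for efficiency).

import math

def dft_magnitudes(samples):
    """Magnitude of each non-negative frequency bin of a real-valued signal."""
    n = len(samples)
    mags = []
    for k in range(n // 2 + 1):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mags.append(math.hypot(re, im))
    return mags

# A sampled tone: 2 cycles over 16 samples, i.e., energy in frequency bin k = 2.
samples = [math.sin(2 * math.pi * 2 * i / 16) for i in range(16)]
mags = dft_magnitudes(samples)
peak_bin = max(range(len(mags)), key=lambda k: mags[k])
assert peak_bin == 2   # the isolated frequency component
```

For a multi-tone audio sample signal, each frequency component of the mix appears as a separate peak in the magnitude spectrum, which is what allows the individual sounds to be isolated as described above.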
Parenthetically, the “acoustic nerve” refers herein to any nerve or nerve segment that can transmit pulses (i.e., nerve impulses), converted from mechanical waves (for example vibrations) detected by the ear or ears, to the brain 102 (in particular the auditory region of the brain, e.g., the auditory cortex) so as to be interpreted and perceived by the brain (and hence by the subject) as sound.
Subsequently, the same audio sample can be played such that the sample is captured by a sound capture device connected to the processing device 12. The processing device 12 collects the audio signals transmitted from the sound capture device to the processing device 12, and analyzes/processes the audio sample signal. The analysis/processing can include, for example, digitization (sampling and quantization) and/or frequency analysis (e.g., FFT). Subsequently, a small change to one or more of the signal characteristics can be made to the audio sample signal, for example by changing one or more of the frequency components or an amplitude value of the audio sample signal, to produce a new audio sample signal. The sound vibration from the new audio sample signal is captured by the ears, and the processing device 12 collects the nerve impulses sent from the ears to the auditory region of the brain 102 (along the acoustic nerves) in response to hearing the new audio sample signal. The same new audio sample signal can then be played such that the sample is captured by the sound capture device, and the processing device 12 collects the audio signals transmitted from the sound capture device to the processing device 12. The processing device 12 analyzes/processes the new audio sample signal (e.g., via digitization and/or FFT). This process can continue by changing the characteristics of the audio sample signal either individually one at a time (e.g., changing a single frequency component, or changing an instantaneous amplitude value), or in incrementally larger groups of signal characteristics (e.g., changing multiple frequency components and/or changing multiple instantaneous amplitude values). For each change to the audio sample signal, the change in the nerve impulse from the ears (compared to the previous sample) is compared with the change in the audio signals collected by the processing device 12 from the sound capture device. 
This process can continue until each nerve impulse from the ear can be matched to a corresponding audio signal component (e.g., sound) transmitted by the sound capture device. This matching between each nerve impulse and a corresponding audio signal component constitutes a mapping between nerve impulses and sounds (i.e., an impulse-sound mapping), which can be used as the imagination-data mapping. Note that the changes to the audio sample signal should preferably cover multiple combinations of sounds (frequency tones), more preferably sounds over any given range of amplitudes and/or frequencies.
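The matching-by-change procedure for the audio case can be sketched as follows. The frequency values and impulse codes are illustrative assumptions; the sketch shows how single-component changes between successive experiments pin down which impulse corresponds to which audio component:

```python
# Sketch: match each changed audio component (frequency tone) to the nerve
# impulse that appeared along with it. Values are illustrative.

def match_by_change(experiments):
    """Each experiment pairs the set of frequency components played with the
    set of impulse codes observed; single-component changes yield matches."""
    matches = {}
    prev_freqs, prev_impulses = experiments[0]
    for freqs, impulses in experiments[1:]:
        df = freqs - prev_freqs            # the frequency that was added
        di = impulses - prev_impulses      # the impulse that appeared
        if len(df) == 1 and len(di) == 1:
            matches[df.pop()] = di.pop()
        prev_freqs, prev_impulses = freqs, impulses
    return matches

experiments = [
    ({440}, {"imp_a"}),
    ({440, 880}, {"imp_a", "imp_b"}),            # adding 880 Hz adds "imp_b"
    ({440, 880, 1320}, {"imp_a", "imp_b", "imp_c"}),
]
assert match_by_change(experiments) == {880: "imp_b", 1320: "imp_c"}
```

As noted above, covering combinations of tones over a range of amplitudes and frequencies fills out the mapping beyond these single-tone additions.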
It is noted herein that the processing device 12 can employ various techniques for obtaining brain signals (for example, nerve impulses or bio-electrical signals) from the region 103 of the brain 102 and for providing brain signals (converted from data) to the region 103. Such techniques may typically rely on employing microdevices, such as microelectrodes or microtransducers, for measuring (receiving) brain signals, and/or for inducing transmission of brain signals along a segment of a transmission route (e.g., a nerve path) that is an input to the region 103 (e.g., the transmission route 108 of
In certain embodiments, the brain signals include, or are in the form of, nerve impulses, whereby inducement of transmission includes inducing transmission of nerve impulses along nerve paths or segments of nerves, for example nerves that form the transmission route 108.
Various entities have conducted research, development, and experimentation on connection and interfacing of computer processing devices to the brain, tissue, and nerves via implantation or other invasive or semi-invasive means. One example of such research can be found in a publication by the University of Luxembourg in 2019 entitled “CONNECT—Developing nervous system-on-a-chip” (available at https://wwwfr.uni.lu/lcsb/research/developmental_and_cellular_biology/news/connect_developing_nervous_system_on_a_chip), which describes culturing individual nervous system components and connecting the components in a microfluid chip (integrated circuit).
Examples of research and experimentation in the field of brain-machine interfacing are described in an article published in Procedia Computer Science in 2011, entitled “Brain-Chip Interfaces: The Present and The Future” by Stefano Vassanelli at the NeuroChip Laboratory of the University of Padova in Italy. In one example, computerized processing devices are interfaced to neurons with metal microelectrodes or oxide-insulated electrical microtransducers (e.g., electrolyte-oxide-semiconductor field-effect transistors (EOSFETs) or electrolyte-oxide-semiconductor capacitors (EOSCs)) to record (i.e., measure) or stimulate neuron electrical activity. In another example, large-scale high-resolution recordings (i.e., measurements) from individual neurons are obtained using a processing device that either employs or is coupled to a microchip featuring a large Multi-Transistor-Array (MTA). In yet a further example, a microchip featuring a large MTA is used to interface with cells in vitro by deploying the MTA in contact with brain tissue, where the signals corresponding to nerve impulses are, in one example, in the form of local field potentials (LFPs).
An example of a brain-machine interface device is the Neuralink device, developed by Neuralink Corporation of San Francisco, USA. The Neuralink device includes an ASIC that digitizes information obtained from neurons via microelectrodes.
Bearing the above in mind, the following paragraphs provide a high-level description of various non-limiting implementations of the interface 18 that can be used for connecting/interfacing the processing device 12 with the subject 100 so as to provide a machine-subject interface, according to non-limiting example embodiments of the present invention.
With continued reference to
In embodiments in which the processing device 12 is operative to convert the data (representative of a concept) to brain signals, and to provide/transmit the brain signals to the region 103 of the brain 102 such that the brain signals are interpreted by the brain 102 and the concept (image or sound) represented by the data is formed by the region 103, the transmission of the brain signals may be effectuated by stimulation of the region 103 (for example stimulation of one or more neurons of the nerves) by a microdevice, e.g., the electrode array 22 (or a transducer). Generally speaking, in such embodiments the processing device 12 can convert (using the imagination-data mapping) data to brain signals (or electrical signals that represent brain signal activity) that are to be transmitted by the region 103. The processing device 12 then provides the brain signals to the region 103 to induce transmission of the brain signals (or provides the electrical impulses to the region 103 to induce transmission of the brain signals represented by electrical impulses). In certain embodiments, the inducing of transmission can be effectuated by the processing device 12 providing electrical signals to the electrode array 22 (or a transducer), which stimulates the neurons at the region 103 in accordance with the electrical signals so as to induce transmission of corresponding nerve impulses.
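The injection path described above — converting data elements to brain signals via the imagination-data mapping and driving the electrode array to induce transmission — can be sketched as follows. This is a hypothetical illustration: the `stimulate` callable standing in for the electrode array 22 (or a transducer), and the use of a plain dictionary for the inverse mapping, are assumptions not specified by the present description.

```python
def inject_concept(data_elements, data_to_impulse, stimulate):
    """Convert each computer-readable data element to its brain-signal
    counterpart using the one-to-one imagination-data mapping (inverse
    direction), then drive the stimulation microdevice to induce
    transmission of the corresponding nerve impulses."""
    delivered = []
    for element in data_elements:
        impulse = data_to_impulse[element]  # inverse of the one-to-one mapping
        stimulate(impulse)                  # e.g., electrode array stimulates region 103
        delivered.append(impulse)
    return delivered
```

Because the mapping is one-to-one, the same table used for rendering (impulse to data) can be inverted once and reused for injection (data to impulse).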
In certain embodiments, the wireless transmission can be RF signal transmission. In such embodiments, the transmitter circuitry and components of the Tx unit 24 can include, for example, signal transmission electronics and components such as one or more antennas, digital-to-analog conversion circuitry, signal modulators, filters, amplifiers, etc., and the receiver circuitry and components of the Rx unit 26 can include, for example, signal reception electronics and components such as one or more antennas, filters, amplifiers, demodulators, etc. In other embodiments, the wireless transmission can be inductive signal transmission, whereby the Tx unit 24 and the Rx unit 26 are operative to transmit and receive, respectively, using inductive signal transmission means. In such embodiments, for example, the Tx unit 24 can include inductive coils, and the Rx unit 26 can include an induction receiver.
As mentioned above, in certain embodiments the interface 18 can provide non-contact or non-invasive contact between the processing device 12 and the region 103 of the brain. For example, the interface 18 can include an optical magnetic field sensor arrangement or a non-contact modulation arrangement employing, for example, optic, magnetic, or ultrasound techniques.
In certain embodiments, in particular embodiments in which the processing device 12 is implemented as a biological processor or biological processing element that is cultured or grown in the subject, the interface 18 is the processing device 12 itself.
It is noted that in certain embodiments, the interfacing arrangement 18 can include multiple interfaces. For example, a first interface can be used to effectuate conversion of data to brain signals. The first interface can employ an electrode array 22 or microtransducers (implemented, for example, as EOSCs) connected or linked to the processing device 12 via a wired connection (for example as shown in
As another example, a set of four interfaces can be used. A first interface can be used to effectuate conversion of image data to brain signals (for visual/image imagination injection), and a second interface can be used to effectuate conversion of brain signals to image data (for visual/image imagination rendering). A third interface can be used to effectuate conversion of data (in the form of audio signals) to brain signals (for sound imagination injection), and a fourth interface can be used to effectuate conversion of brain signals to audio signal data (for sound imagination rendering).
Referring now again to
In one example, the control unit 15 allows the user to define the rules or handling criteria that determine the at least one operation performed on generated data by the processing device 12, as well as to select the handling rule and/or change from the selected rule to another rule. For example, the user can define a set of rules according to which the processing device 12 operates. As an additional example, the user can select an existing rule/set of rules (e.g., data storage rules, data modification rules, output rules) or a newly defined rule/set of rules such that the processing device 12 operates according to the selected rule(s) (e.g., a set of data storage rules (criteria), a set of data modification (manipulation) rules, or a set of output rules (criteria)). In addition, the user can select, via the control unit 15, parameters related to the defined rules. For example, if the user selects that the processing device 12 is to operate according to a set of data modification (manipulation) rules, the user can select how the generated data is to be modified, including selecting any additional images or sounds that are to be used to modify generated data. These additional images or sounds can be received from various sources, including, for example, a computer memory associated with the processing device 12 that stores images or sounds in digital form, an image capture device, an audio capture device or an input device such as a microphone or audio player, and the like.
As another example, if the user selects that the processing device 12 is to operate according to a set of data storage rules, the user can select the memory device (e.g., storage/memory 16, external storage/memory 32, server system 34) for storing data that is representative of the generated data, and may also select which portions (segments or sub-samples) of the data are to be stored on which memory device (e.g., the user can select some of the data to be stored locally in storage/memory 16, and select other parts of the data to be stored remotely at server system 34).
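The rule-based handling of generated data described in the preceding paragraphs — data modification rules, data storage rules, and output rules selected via the control unit 15 — can be sketched as follows. The rule structure and field names here are illustrative assumptions; the specification describes the rule categories but not a concrete data format.

```python
def apply_rules(generated, rules):
    """Apply each user-selected handling rule to the generated data in order.
    Rule types mirror the categories described in the text: 'modify'
    (e.g., overlay an additional image or sound), 'store' (e.g., write to
    storage/memory 16 or a server system), and 'output' (e.g., display)."""
    result = generated
    actions = []
    for rule in rules:
        if rule["type"] == "modify":
            result = rule["fn"](result)    # user-selected modification function
            actions.append("modified")
        elif rule["type"] == "store":
            rule["memory"].append(result)  # user-selected memory device
            actions.append("stored")
        elif rule["type"] == "output":
            actions.append(f"output->{rule['target']}")
    return result, actions
```

A user-defined rule set then reduces to an ordered list of such rule records, which the processing device applies to each unit of generated data.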
The control unit 15 also preferably allows the user to select data that is to be converted to brain signals by the processing device 12. The selection can be applied via a menu that is part of the user input interface of the control unit 15. The menu may include a list of images or digital audio tracks or sounds that are stored in a memory associated with the processing device 12. In addition, the control unit 15 preferably allows the user to adjust and set the rate at which brain signals, converted from data by the processing device 12, are provided to the region 103. The rate setting can be applied via the user input interface of the control unit 15.
In certain preferred embodiments, the control unit 15 provides selective switching between different operational modes of the system 10 in response to user input. For example, the control unit 15 can selectively actuate the processing device 12 to retrieve data (images or audio) from a memory (e.g., storage/memory 16, storage/memory 32, a server system 34) or external device (e.g., image capture device or audio capture device). As such, the control unit 15 can enable the user to control if and when data (e.g., image data or audio signals) from a memory (e.g., storage/memory 16, storage/memory 32, a server system 34) or external device are converted to brain signals, and/or if and when such converted brain signals are provided/transmitted to the region 103. In this way, the user can control if and when the subject imagines injected images or sounds.
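The mode-switching and rate-setting behavior of the control unit 15 described above can be sketched as a small state holder. The mode names ("idle", "render", "inject") are assumptions introduced for illustration; the specification describes selective switching between operational modes without naming them.

```python
class ControlUnit:
    """Minimal sketch of the control unit's user-facing state: which
    operational mode the system 10 is in, and the user-adjustable rate at
    which converted brain signals are provided to the region 103."""
    MODES = ("idle", "render", "inject")  # render: brain->data; inject: data->brain

    def __init__(self):
        self.mode = "idle"
        self.injection_rate_hz = 1.0  # assumed default; user-adjustable

    def set_mode(self, mode):
        if mode not in self.MODES:
            raise ValueError(f"unknown mode: {mode}")
        self.mode = mode

    def set_rate(self, hz):
        if hz <= 0:
            raise ValueError("rate must be positive")
        self.injection_rate_hz = hz
```

In this sketch, the user controls if and when injection occurs simply by switching the mode; the processing device would consult the mode and rate before converting or providing any brain signals.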
The control unit 15 is a computerized control unit that includes one or more computer processors coupled to a computerized storage medium (e.g., memory). The one or more processors can be implemented as any number of computerized processors, including, but not limited to, microprocessors, microcontrollers, ASICs, FPGAs, DSPs, FPLAs, state machines, biological processors, and the like. In microprocessor implementations, the microprocessors can be, for example, conventional processors, such as those used in servers, computers, and other computerized devices. For example, the microprocessors may include x86 processors from AMD and Intel, such as Xeon® and Pentium® processors from Intel. The aforementioned computerized processors include, or may be in electronic communication with, computer readable media, which stores program code or instruction sets that, when executed by the computerized processor, cause the computerized processor to perform actions. Types of computer readable media include, but are not limited to, electronic, optical, magnetic, or other storage or transmission devices capable of providing a computerized processor with computer readable instructions. The storage/memory of the control unit 15 can be any conventional storage media or any application specific or special purpose storage media and can be implemented in various ways, including, for example, one or more volatile or non-volatile memory devices, a flash memory, a read-only memory, a random-access memory, and the like, or any combination thereof. In certain embodiments, the storage/memory of the control unit 15 can store machine executable instructions that can be executed by the one or more processors of the control unit 15.
In certain embodiments, the processing device 12 and the control unit 15 share one or more common processors, such that the processing device 12 is operative to perform both processing and control functionality. In other, sometimes more preferable, embodiments, the control unit 15 and the processing device 12 are separate electronic devices that are electronically connected via a wired or wireless connection. In such embodiments, the control unit 15 can be implemented as a user computer device, which includes, for example, mobile computing devices including but not limited to laptops, smartphones, and tablets, and stationary computing devices including but not limited to desktop computers.
In other embodiments, the control unit 15 is implemented via application software executed on an electronic device, such as a mobile communication device (e.g., smartphone, tablet, etc.) or computer device (e.g., laptop, desktop, etc.). In embodiments in which the control unit 15 is implemented on a smartphone, tablet, laptop, etc., the software application can provide a user input interface. In certain embodiments, the control unit 15 provides control via direct wired connection or indirect wireless connection to the processing device 12.
In certain embodiments, control functionality of the operation of the processing device 12 and the system 10 is provided algorithmically, for example, by artificial intelligence algorithms executed by the processing device 12, such that the system 10 operates automatically without necessitating any function specific control unit.
Although the embodiments of the present invention described thus far have pertained to a single processing device 12, embodiments employing multiple such processing devices are contemplated herein. For example, a first processing device can be deployed to interface with the input to the occipital lobe 106 from the parieto-occipital sulcus 109 in order to perform visual imagination rendering and injection, and a second processing device can be deployed to interface with the auditory region of the brain (that performs auditory processing) in order to perform audio/sound imagination rendering and injection.
In addition, although the embodiments of the present invention described thus far have pertained to a processing device 12 that performs brain signal to data conversion and data to brain signal conversion, other embodiments are possible in which the tasks of conversion of brain signals to data and the conversion of data to brain signals are subdivided amongst various processors or processing devices/components. In fact, the processing of brain signals and data as described herein can be performed by any number of processors selected from a plurality of distributed processors which together form a processing subsystem. One or more of the processors can be local to the processing device 12 (and the subject 100) and one or more of the processors can be remote from the processing device 12. Alternatively, only remote processors can be used for processing brain signals and data.
In one example, in the context of the embodiment illustrated in
Thus, generally speaking, the processing subsystem 200 (which by definition includes at least one processor) is configured to perform the tasks of the processing device 12 described above, including one or more of: receiving/obtaining brain signals that are representative of concepts formed by the region 103 of the brain, processing the received/obtained brain signals to convert the concepts to generate data, performing at least one operation on the generated data according to one or more rules, receiving/obtaining data, processing the received/obtained data to convert the received/obtained data to brain signals, and providing the brain signals to the region 103 of the brain.
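The subdivision of conversion tasks among local and remote processors of the processing subsystem 200, described above, can be sketched as a simple dispatcher. The routing policy shown (keeping signal acquisition local and distributing conversion work across remote processors) is an illustrative assumption; the specification permits any partitioning, including a fully remote one.

```python
def dispatch(task, local_workers, remote_workers):
    """Route a brain-signal/data processing task to one processor of the
    distributed processing subsystem. Latency-sensitive acquisition stays
    on a processor local to the subject; conversion work is spread over
    remote processors when any are available."""
    if task["kind"] == "acquire":
        return local_workers[0](task)       # local to processing device 12
    pool = remote_workers or local_workers  # fall back to local if no remote
    return pool[task["id"] % len(pool)](task)
```

Under this sketch, a configuration with no local workers in the pool fallback corresponds to the variant in which only remote processors perform the brain-signal and data processing.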
Although the embodiments of the present invention are of particular use when applied within the context of human imagination and thought, embodiments of the present disclosure may be equally applicable to imagination in non-human animal subjects, including, but not limited to, other primate species (e.g., monkeys, gorillas, etc.), canine species, feline species, reptile species, bird species, marine/aquatic species, etc. In such non-human applications, brain signals can be collected via the same or similar interfacing methods discussed above, and converted to data by the processing device 12 using a species-specific imagination-data mapping. As mentioned above, for a given animal species, the corresponding brain region that is responsible for forming visual or sound concepts can be identified using brain scanning techniques such as magnetic resonance imaging.
Any resultant data can, for example, be output to another system for further processing or use. For example, the images or audio signals generated from brain signals in a canine subject can be provided for display or playback to be seen or heard by a human subject, or can be converted to brain signals using a human imagination-data mapping function and provided to the region 103 of a human subject such that the human subject can imagine the images or sounds as perceived by the canine subject.
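The cross-species pipeline described above — rendering a canine subject's brain signals to data with a canine mapping, then re-encoding that data with a human imagination-data mapping for injection — can be sketched as two mapping applications in sequence. The dictionary representation of the species-specific mappings is an assumption for illustration.

```python
def cross_species_transfer(impulses, source_to_data, target_data_to_impulse):
    """Render impulses from a source species (e.g., canine) to data using
    its species-specific mapping, then convert that data to brain signals
    for a target species (e.g., human) using the target's mapping."""
    data = [source_to_data[i] for i in impulses]            # rendering step
    return [target_data_to_impulse[d] for d in data]        # injection step
```

Any data element rendered from the source subject that has no counterpart in the target mapping would need to be dropped or approximated; the sketch assumes complete mappings for simplicity.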
Implementation of the method and/or system of embodiments of the invention can involve performing or completing selected tasks manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of embodiments of the method and/or system of the invention, several selected tasks could be implemented by hardware, by software or by firmware or by a combination thereof using an operating system.
For example, hardware for performing selected tasks according to embodiments of the invention could be implemented as a chip or a circuit. As software, selected tasks according to embodiments of the invention could be implemented as a plurality of software instructions being executed by a computer using any suitable operating system. In an exemplary embodiment of the invention, one or more tasks according to exemplary embodiments of method and/or system as described herein are performed by a data processor, such as a computing platform for executing a plurality of instructions. Optionally, the data processor includes a volatile memory for storing instructions and/or data and/or a non-volatile storage, for example, non-transitory storage media such as a magnetic hard-disk and/or removable media, for storing instructions and/or data. Optionally, a network connection is provided as well. A display and/or a user input device such as a keyboard or mouse are optionally provided as well.
For example, any combination of one or more non-transitory computer readable (storage) medium(s) may be utilized in accordance with the above-listed embodiments of the present invention. A non-transitory computer readable (storage) medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
As will be understood with reference to the paragraphs and the referenced drawings, provided above, various embodiments of computer-implemented methods are provided herein, some of which can be performed by various embodiments of apparatuses and systems described herein and some of which can be performed according to instructions stored in non-transitory computer-readable storage media described herein. Still, some embodiments of computer-implemented methods provided herein can be performed by other apparatuses or systems and can be performed according to instructions stored in computer-readable storage media other than that described herein, as will become apparent to those having skill in the art with reference to the embodiments described herein. Any reference to systems and computer-readable storage media with respect to the following computer-implemented methods is provided for explanatory purposes, and is not intended to limit any of such systems and any of such non-transitory computer-readable storage media with regard to embodiments of computer-implemented methods described above. Likewise, any reference to the following computer-implemented methods with respect to systems and computer-readable storage media is provided for explanatory purposes, and is not intended to limit any of such computer-implemented methods disclosed herein.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
As used herein, the singular form “a”, “an” and “the” include plural references unless the context clearly dictates otherwise. For example, reference to a single nerve can also refer to both nerves of a nerve pair. Furthermore, reference to both nerves of a nerve pair can also refer to a single nerve, unless the context clearly dictates otherwise.
The word “exemplary” is used herein to mean “serving as an example, instance or illustration”. Any embodiment described as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments and/or to exclude the incorporation of features from other embodiments.
It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable subcombination or as suitable in any other described embodiment of the invention. Certain features described in the context of various embodiments are not to be considered essential features of those embodiments, unless the embodiment is inoperative without those elements.
The above-described processes including portions thereof can be performed by software, hardware and combinations thereof. These processes and portions thereof can be performed by computers, computer-type devices, workstations, processors, micro-processors, other electronic searching tools and memory and other non-transitory storage-type devices associated therewith. The processes and portions thereof can also be embodied in programmable non-transitory storage media, for example, compact discs (CDs) or other discs including magnetic, optical, etc., readable by a machine or the like, or other computer usable storage media, including magnetic, optical, or semiconductor storage, or other source of electronic signals.
The processes (methods) and systems, including components thereof, herein have been described with exemplary reference to specific hardware and software. The processes (methods) have been described as exemplary, whereby specific steps and their order can be omitted and/or changed by persons of ordinary skill in the art to reduce these embodiments to practice without undue experimentation. The processes (methods) and systems have been described in a manner sufficient to enable persons of ordinary skill in the art to readily adapt other hardware and software as may be needed to reduce any of the embodiments to practice without undue experimentation and using conventional techniques.
To the extent that the appended claims have been drafted without multiple dependencies, this has been done only to accommodate formal requirements in jurisdictions which do not allow such multiple dependencies. It should be noted that all possible combinations of features which would be implied by rendering the claims multiply dependent are explicitly envisaged and should be considered part of the invention.
Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims.