A system for indexing displayed elements that is useful for accessing and understanding new or difficult materials, in which a user highlights unknown words or characters or other displayed elements encountered while viewing displayed materials. In a language learning application, the system displays the meaning of a word in context; and the user may include the word in a personal vocabulary to build a database of words and phrases. In a Japanese language application, one or more Japanese language books are read on an electronic display. Readings (‘yomi’) for all words are readily viewable for any selected word or phrase, as well as an English reference to the selected word or phrase. Extensive notes are provided for difficult phrases and words not normally found in a dictionary. A unique indexing scheme allows word-by-word access to any of several external multi-media references.

Patent
   RE40731
Priority
Feb 16 1994
Filed
Feb 24 2005
Issued
Jun 09 2009
Expiry
Feb 16 2014
0. 96. A method for linking textual source material to external reference materials for display, the method comprising the steps of:
determining a beginning position address of textual source material stored in an electronic database;
cutting the textual source material into a plurality of discrete pieces;
determining a starting point address and an ending point address of at least one of the plurality of discrete pieces based upon the beginning position address;
recording in a look up table the starting and ending point addresses;
linking at least one of the plurality of discrete pieces to at least one of a plurality of external reference materials by recording in the look-up table, along with the starting and ending point addresses of the at least one of the plurality of discrete pieces, a link to the at least one of the plurality of external reference materials, the plurality of external reference materials comprising any of textual, audio, video, and picture information;
displaying an image of the textual source material;
selecting a discrete portion of the displayed textual source material image;
determining a display address of the selected discrete portion;
converting the display address of the selected discrete portion to an offset value from the beginning position address;
comparing the offset value with the starting and ending point addresses recorded in the look-up table to identify one of the plurality of discrete pieces;
selecting one of the plurality of external reference materials corresponding to the identified one of the plurality of discrete pieces;
retrieving the selected one of the plurality of external reference materials using a recorded link to the selected one of the plurality of external reference materials; and
displaying the retrieved external reference material.
16. In a language learning method, a method for linking source material to external reference materials for display, the method comprising the steps of:
determining a beginning position address of a text image, said text image including a plurality of discrete pieces having links to external reference materials comprising any of textual, audio, video, and picture information;
cutting the text into a plurality of discrete pieces;
determining a starting point address and an ending point address of at least one of the plurality of discrete pieces based upon the beginning position address;
recording in a look-up table the starting and ending point addresses;
linking at least one of the plurality of discrete pieces to at least one of a plurality of external reference materials by recording in the look-up table, along with the starting and ending point addresses of the at least one of the plurality of discrete pieces, a link to the at least one of the plurality of external reference materials, the plurality of external reference materials comprising any of textual, audio, video, and picture information;
displaying an image of the cut text;
selecting a discrete portion of the displayed text image;
determining a display address of the selected discrete portion;
converting the display address of the selected discrete portion to an offset value from the beginning position address;
comparing the offset value with the starting and ending point addresses recorded in the look-up table to identify one of the plurality of discrete pieces;
selecting one of the plurality of external reference materials corresponding to the identified one of the plurality of discrete pieces;
retrieving the selected one of the plurality of external reference materials using a recorded link to the selected one of the plurality of external reference materials; and
displaying the retrieved external reference material.
8. A method for linking source material to external reference materials for display, the method comprising the steps of:
determining a beginning position address of a source material image stored in an electronic database, said source material image including a plurality of discrete pieces having links to external reference materials comprising any of textual, audio, video, and picture information;
cutting the source material image into a plurality of discrete pieces;
determining a starting point address and an ending point address of at least one of the plurality of discrete pieces based upon the beginning position address;
recording in a look up table the starting and ending point addresses;
linking at least one of the plurality of discrete pieces to at least one of a plurality of external reference materials by recording in the look-up table, along with the starting and ending point addresses of the at least one of the plurality of discrete pieces, a link to the at least one of the plurality of external reference materials, the plurality of external reference materials comprising any of textual, audio, video, and picture information;
displaying an image of the source material;
selecting a discrete portion of the displayed source material image;
determining a display address of the selected discrete portion;
converting the display address of the selected discrete portion to an offset value from the beginning position address;
comparing the offset value with the starting and ending point addresses recorded in the look-up table to identify one of the plurality of discrete pieces;
selecting one of the plurality of external reference materials corresponding to the identified one of the plurality of discrete pieces;
retrieving the selected one of the plurality of external reference materials using a recorded link to the selected one of the plurality of external reference materials; and
displaying the retrieved external reference material.
0. 95. A system for linking textual source material to external reference material for display, the system comprising:
means for determining a beginning position address of textual source material stored in an electronic database;
means for cutting the textual source material into a plurality of discrete pieces;
means for determining a starting point address and an ending point address of at least one of the plurality of discrete pieces based upon the beginning position address;
means for recording in a look-up table the starting and ending point addresses;
means for linking at least one of the plurality of discrete pieces to at least one of a plurality of external reference materials by recording in the look-up table, along with the starting and ending point addresses of the at least one of the plurality of discrete pieces, a link to the at least one of the plurality of external reference materials, the plurality of external reference materials comprising any of textual, audio, video, and picture information;
means for displaying an image of the textual source material;
means for selecting a discrete portion of the displayed textual source material image;
means for determining a display address of the selected discrete portion;
means for converting the display address of the selected discrete portion to an offset value from the beginning position address;
means for comparing the offset value with the starting and ending point addresses recorded in the look-up table to identify one of the plurality of discrete pieces;
means for selecting one of the plurality of external reference materials corresponding to the identified one of the plurality of discrete pieces;
means for retrieving the selected one of the plurality of external reference materials using a recorded link to the selected one of the plurality of external reference materials; and
means for displaying the retrieved external reference material.
9. In a language learning method, a method for linking source material to external reference materials for display, the method comprising the steps of:
reading a foreign language source material image including a plurality of discrete pieces having links to external reference materials comprising any of textual, audio, video, and picture information with an electronic viewer;
accessing reference materials on selected portions of said source material image;
determining a beginning position address of a foreign language source material image;
cutting the source material image into a plurality of discrete pieces;
determining a starting point address and an ending point address of at least one of the plurality of discrete pieces based upon the beginning position address;
recording in a look-up table the starting and ending point addresses;
linking at least one of the plurality of discrete pieces to at least one of a plurality of external reference materials by recording in the look-up table, along with the starting and ending point addresses of the at least one of the plurality of discrete pieces, a link to the at least one of the plurality of external reference materials, the plurality of external reference materials comprising any of textual, audio, video, and picture information;
displaying an image of the source material;
selecting a discrete portion of the displayed source material image;
determining a display address of the selected discrete portion;
converting the display address of the selected discrete portion to an offset value from the beginning position address;
comparing the offset value with the starting and ending point addresses recorded in the look-up table to identify one of the plurality of discrete pieces;
selecting one of the plurality of external reference materials corresponding to the identified one of the plurality of discrete pieces;
retrieving the selected one of the plurality of external reference materials using a recorded link to the selected one of the plurality of external reference materials; and
displaying the retrieved external reference material.
1. A system for linking source material to external reference materials for display, the system comprising:
a source material image including a plurality of discrete pieces having links to external reference materials comprising any of textual, audio, video, and picture information, said source material image stored in an electronic database;
means for determining a beginning position address of a source material stored in an electronic database;
means for cutting the source material image into a plurality of discrete pieces;
means for determining a starting point address and an ending point address of at least one of the plurality of discrete pieces based upon the beginning position address;
means for recording in a look-up table the starting and ending point addresses;
means for linking at least one of the plurality of discrete pieces to at least one of a plurality of external reference materials by recording in the look-up table, along with the starting and ending point addresses of the at least one of the plurality of discrete pieces, a link to the at least one of the plurality of external reference materials, the plurality of external reference materials comprising any of textual, audio, video, and picture information;
means for displaying an image of the source material;
means for selecting a discrete portion of the displayed source material image;
means for determining a display address of the selected discrete portion;
means for converting the display address of the selected discrete portion to an offset value from the beginning position address;
means for comparing the offset value with the starting and ending point addresses recorded in the look-up table to identify one of the plurality of discrete pieces;
means for selecting one of the plurality of external reference materials corresponding to the identified one of the plurality of discrete pieces;
means for retrieving the selected one of the plurality of external reference materials using a recorded link to the selected one of the plurality of external reference materials; and
means for displaying the retrieved external reference material.
15. In a language learning system, a system for linking source material to external reference materials for display, the system comprising:
a text image including a plurality of discrete pieces having links to external reference materials comprising any of textual, audio, video, and picture information;
means for determining a beginning position address of said text image;
means for cutting the text image into a plurality of discrete pieces;
means for determining a starting point address and an ending point address of at least one of the plurality of discrete pieces based upon the beginning position address;
means for recording in a look-up table the starting and ending point addresses;
means for linking at least one of the plurality of discrete pieces to at least one of a plurality of external reference materials by recording in the look-up table, along with the starting and ending point addresses of the at least one of the plurality of discrete pieces, a link to the at least one of the plurality of external reference materials, the plurality of external reference materials comprising any of textual, audio, video, and picture information;
means for displaying an image of the text;
means for selecting a discrete portion of the displayed text image;
means for determining a display address of the selected discrete portion;
means for converting the display address of the selected discrete portion to an offset value from the beginning position address;
means for comparing the offset value with the starting and ending point addresses recorded in the look-up table to identify one of the plurality of discrete pieces;
means for selecting one of the plurality of external reference materials corresponding to the identified one of the plurality of discrete pieces;
means for retrieving the selected one of the plurality of external reference materials using a recorded link to the selected one of the plurality of external reference materials; and
means for displaying the retrieved external reference material.
2. The system of claim 1, wherein the means for linking links at least one of the plurality of discrete pieces to at least one of a plurality of external reference materials on any of a word-by-word or phrase-by-phrase basis.
3. The system of claim 2, said linking engine further comprising:
word cut means for dividing said source material into discrete pieces;
linking means for establishing at least one link between each of said discrete pieces and said reference information;
compiler means for compiling the source material image from at least the plurality of discrete pieces; and
indexing means for indexing at least one of the plurality of discrete pieces and corresponding links to the plurality of external reference materials.
4. The system of claim 3, said linking engine further comprising:
means for building the look-up table from the indexed discrete pieces and the corresponding links to the plurality of external reference materials.
5. The system of claim 4, wherein the look-up table links the identified one of the plurality of discrete pieces to at least a corresponding one of a plurality of external reference materials based upon the offset value.
6. The system of claim 5, wherein the identified one of the plurality of discrete pieces is identified based upon the offset value being within a range defined by the starting and ending point addresses of the identified one of the plurality of discrete pieces.
7. The system of claim 1, further comprising:
means for manipulating the source material image and the plurality of external reference materials with at least two user keys.
10. The method of claim 9, wherein the step of linking comprises:
linking at least one of the plurality of discrete pieces to at least a corresponding one of a plurality of external reference materials on any of a word-by-word or phrase-by-phrase basis.
11. The method of claim 10, said linking step further comprising the steps of:
dividing said source material into discrete pieces;
establishing at least one link between each of said discrete pieces and said reference information;
compiling the source material image from at least the plurality of discrete pieces; and
indexing at least one of the plurality of discrete pieces in the source material image and the corresponding links to the plurality of external reference materials.
12. The method of claim 11, said linking step further comprising the step of:
building the look-up table from the indexed discrete pieces of the source material image and the corresponding links to the plurality of external reference materials.
13. The method of claim 12, wherein the look-up table links the identified one of the plurality of discrete pieces to at least a corresponding one of a plurality of external reference materials based upon the offset value.
14. The method of claim 13, wherein the identified one of the plurality of discrete pieces is identified based upon the offset value being within a range defined by the starting and ending point addresses of the identified one of the plurality of discrete pieces.
0. 17. The method of claim 8, wherein cutting the source material into a plurality of discrete pieces is done manually.
0. 18. The method of claim 8, wherein cutting the source material into a plurality of discrete pieces is done automatically.
0. 19. The method of claim 18, wherein automatically cutting the source material into a plurality of discrete pieces is done using a grammar parser.
0. 20. The method of claim 18, wherein automatically cutting the source material into a plurality of discrete pieces is done without using tags.
0. 21. The method of claim 18, wherein automatically cutting the source material into a plurality of discrete pieces is done without reference to any tags which may be located in the source material.
0. 22. The method of claim 8, wherein the link is a hyperlink.
0. 23. The method of claim 8, wherein the link is an address of the selected one of the plurality of external reference materials.
0. 24. The method of claim 8, wherein the link is reference information for retrieving the selected one of the plurality of external reference materials.
0. 25. The method of claim 8, wherein determining a display address of the selected discrete portion is done without using tags.
0. 26. The method of claim 8, wherein determining a display address of the selected discrete portion is done without reference to any tags which may be located in the source material.
0. 27. The method of claim 8, wherein determining a display address of the selected discrete portion is done without reference to any hierarchical information which may be located in the source material.
0. 28. The method of claim 8, wherein converting the display address of the selected discrete portion to an offset value from the beginning position address is done without using tags.
0. 29. The method of claim 8, wherein converting the display address of the selected discrete portion to an offset value from the beginning position address is done without reference to any tags which may be located in the source material.
0. 30. The method of claim 8, wherein converting the display address of the selected discrete portion to an offset value from the beginning position address is done without reference to any hierarchical information which may be located in the source material.
0. 31. The method of claim 8, wherein comparing the offset value with the starting and ending point addresses recorded in the look-up table to identify one of the plurality of discrete pieces is done without using tags.
0. 32. The method of claim 8, wherein comparing the offset value with the starting and ending point addresses recorded in the look-up table to identify one of the plurality of discrete pieces is done without reference to any tags which may be located in the source material.
0. 33. The method of claim 8, wherein comparing the offset value with the starting and ending point addresses recorded in the look-up table to identify one of the plurality of discrete pieces is done without reference to any hierarchical information which may be located in the source material.
0. 34. The method of claim 8, wherein retrieving the selected one of the plurality of external reference materials is done using a hyperlink.
0. 35. The method of claim 8, wherein retrieving the selected one of the plurality of external reference materials is done using an address of the at least one external reference.
0. 36. The method of claim 8, wherein the source material is text based source material.
0. 37. The method of claim 8, wherein the source material is image based source material.
0. 38. The method of claim 8, wherein the source material is graphic based source material.
0. 39. The method of claim 8, wherein the source material is audio based source material.
0. 40. The method of claim 8, wherein the source material is video based source material.
0. 41. The method of claim 8, wherein the source material is a combination of two or more of text based source material, image based source material, graphic based source material, audio based source material, and video based source material.
0. 42. The method of claim 8, wherein the plurality of external reference materials comprises a plurality of text based external reference materials.
0. 43. The method of claim 8, wherein the plurality of external reference materials comprises a plurality of image based external reference materials.
0. 44. The method of claim 8, wherein the plurality of external reference materials comprises a plurality of graphic based external reference materials.
0. 45. The method of claim 8, wherein the plurality of external reference materials comprises a plurality of audio based external reference materials.
0. 46. The method of claim 8, wherein the plurality of external reference materials comprises a plurality of video based external reference materials.
0. 47. The method of claim 8, wherein at least one of the plurality of external reference materials is a combination of two or more of text based external reference material, image based external reference material, graphic based external reference material, audio based external reference material, and video based external reference material.
0. 48. The method of claim 8, wherein linking at least one of the plurality of discrete pieces is done manually.
0. 49. The method of claim 8, wherein linking at least one of the plurality of discrete pieces is done automatically.
0. 50. The method of claim 8, wherein the electronic database is an electronic relational database.
0. 51. The method of claim 8, wherein the electronic database is an electronic file.
0. 52. The method of claim 8, wherein the electronic database is an electronic text.
0. 53. The method of claim 8, wherein the beginning position address is a beginning location of the source material in the electronic database.
0. 54. The method of claim 53, wherein each starting point address is a starting location of at least one of the plurality of discrete pieces based upon the beginning location of the source material.
0. 55. The method of claim 53, wherein the ending point address is an ending location of at least one of the plurality of discrete pieces based upon the beginning location of the source material.
0. 56. The system of claim 1, wherein cutting the source material into a plurality of discrete pieces is done manually.
0. 57. The system of claim 1, wherein cutting the source material into a plurality of discrete pieces is done automatically.
0. 58. The system of claim 57, wherein automatically cutting the source material into a plurality of discrete pieces is done using a grammar parser.
0. 59. The system of claim 57, wherein automatically cutting the source material into a plurality of discrete pieces is done without using tags.
0. 60. The system of claim 57, wherein automatically cutting the source material into a plurality of discrete pieces is done without reference to any tags which may be located in the source material.
0. 61. The system of claim 1, wherein the link is a hyperlink.
0. 62. The system of claim 1, wherein the link is an address of the selected one of the plurality of external reference materials.
0. 63. The system of claim 1, wherein the link is reference information for retrieving the selected one of the plurality of external reference materials.
0. 64. The system of claim 1, wherein determining a display address of the selected discrete portion is done without using tags.
0. 65. The system of claim 1, wherein determining a display address of the selected discrete portion is done without reference to any tags which may be located in the source material.
0. 66. The system of claim 1, wherein determining a display address of the selected discrete portion is done without reference to any hierarchical information which may be located in the source material.
0. 67. The system of claim 1, wherein converting the display address of the selected discrete portion to an offset value from the beginning position address is done without using tags.
0. 68. The system of claim 1, wherein converting the display address of the selected discrete portion to an offset value from the beginning position address is done without reference to any tags which may be located in the source material.
0. 69. The system of claim 1, wherein converting the display address of the selected discrete portion to an offset value from the beginning position address is done without reference to any hierarchical information which may be located in the source material.
0. 70. The system of claim 1, wherein comparing the offset value with the starting and ending point addresses recorded in the look-up table to identify one of the plurality of discrete pieces is done without using tags.
0. 71. The system of claim 1, wherein comparing the offset value with the starting and ending point addresses recorded in the look-up table to identify one of the plurality of discrete pieces is done without reference to any tags which may be located in the source material.
0. 72. The system of claim 1, wherein comparing the offset value with the starting and ending point addresses recorded in the look-up table to identify one of the plurality of discrete pieces is done without reference to any hierarchical information which may be located in the source material.
0. 73. The system of claim 1, wherein retrieving the selected one of the plurality of external reference materials is done using a hyperlink.
0. 74. The system of claim 1, wherein retrieving the selected one of the plurality of external reference materials is done using an address of the at least one external reference.
0. 75. The system of claim 1, wherein the source material is text based source material.
0. 76. The system of claim 1, wherein the source material is image based source material.
0. 77. The system of claim 1, wherein the source material is graphic based source material.
0. 78. The system of claim 1, wherein the source material is audio based source material.
0. 79. The system of claim 1, wherein the source material is video based source material.
0. 80. The system of claim 1, wherein the source material is a combination of two or more of text based source material, image based source material, graphic based source material, audio based source material, and video based source material.
0. 81. The system of claim 1, wherein the plurality of external reference materials comprises a plurality of text based external reference materials.
0. 82. The system of claim 1, wherein the plurality of external reference materials comprises a plurality of image based external reference materials.
0. 83. The system of claim 1, wherein the plurality of external reference materials comprises a plurality of graphic based external reference materials.
0. 84. The system of claim 1, wherein the plurality of external reference materials comprises a plurality of audio based external reference materials.
0. 85. The system of claim 1, wherein the plurality of external reference materials comprises a plurality of video based external reference materials.
0. 86. The system of claim 1, wherein at least one of the plurality of external reference materials is a combination of two or more of text based external reference material, image based external reference material, graphic based external reference material, audio based external reference material, and video based external reference material.
0. 87. The system of claim 1, wherein linking at least one of the plurality of discrete pieces is done manually.
0. 88. The system of claim 1, wherein linking at least one of the plurality of discrete pieces is done automatically.
0. 89. The system of claim 1, wherein the electronic database is an electronic relational database.
0. 90. The system of claim 1, wherein the electronic database is an electronic file.
0. 91. The system of claim 1, wherein the electronic database is electronic text.
0. 92. The system of claim 1, wherein the beginning position address is a beginning location of the source material in the electronic database.
0. 93. The system of claim 92, wherein each starting point address is a starting location of at least one of the plurality of discrete pieces based upon the beginning location of the source material.
0. 94. The system of claim 92, wherein each ending point address is an ending location of at least one of the plurality of discrete pieces based upon the beginning location of the source material.

This is a continuation of application Ser. No. 08/197,157 filed Feb. 16, 1994 now abandoned.

1. Technical Field

The present invention relates to indexing displayed elements. More particularly, the present invention relates to a novel indexing scheme that is useful in such applications as learning a foreign language, for example a language based upon an ideographic alphabet, such as Japanese.

2. Description of the Prior Art

As the global economy turns the world's many nations into what media visionary Marshall McLuhan referred to as a global village, the need to learn and use new or specialized information, such as a language other than one's native language, becomes increasingly important. For example, there is a tremendous international demand for information related to Japan. Inside Japan, there is an abundance of information available in the Japanese language in numerous media forms. Japan has five national newspapers, seven major broadcasting networks, and several hundred book and magazine publishers. Japanese television covers even the most obscure topics; and there are special interest magazines covering the full spectrum of Japanese society. Speakers of the Japanese language can find information on just about any topic imaginable. Unfortunately, outside of Japan this information is in short supply, and the information that is available is primarily in English.

Individuals trying to learn about Japan are faced with the dilemma of either relying on English language sources or going through the pains of learning Japanese. English language information on Japan must go through the translation process. This results in time delays in obtaining necessary information, as well as in distortions in meaning. Furthermore, economics itself places restrictions on what information makes its way into English and what information does not. For general and introductory information on Japan, the English-based media is providing a valuable service. But for people who want to do more than scratch the surface, such information is far from sufficient.

A large number of non-native speakers have sought to study Japanese in universities or in professional language schools. In recent years, the interest level in Japanese among first-year college students has soared, such that it is rated second only to Spanish in some surveys. The number of people studying Japanese in the mid-1980s in the United States was 50,000. This number has recently grown to 400,000 persons. But the study of Japanese is plagued by the burdens of learning Kanji, the ideographic alphabet in which Japanese is written. Thus, the standing-room-only first-year Japanese language class in many universities soon becomes an almost private-lesson-like third-year class due to student attrition resulting from the difficulty of mastering Kanji.

The situation in Japan for foreigners is not much more encouraging. The cost of living in Japan poses a major barrier for both business people and students. There are currently over 300,000 United States citizens working or studying in Japan. But in recent years, foreign companies have been cutting their foreign staff. This, in part, has been in response to the enormous expense associated with maintaining them in Japan; but it is also a statement about the effectiveness of a large percentage of these people, who typically possess no Japanese language skills or background. Nevertheless, the necessity to do business in Japan is clear to most major United States companies, and access to Japan's inside information is critical to the success of such companies.

The situation in Japanese universities is also discouraging. There are currently about 30,000 foreign students in Japanese universities, compared to a total of over 300,000 foreign students studying in the United States. Ninety percent of the foreign students in Japan are from Asia, while there are fewer than 1,000 students in Japan from the United States. The cost of living and housing again contribute greatly to this disparity, but the language barrier must be seen as the prime hurdle that causes students to abandon the attempt to explore Japan. In the future, the desirability for students and researchers to work in Japan should increase due to the growth of “science cities” and the increase in the hiring of foreign researchers by Japanese corporations. The burden of studying Japanese, however, remains.

In total there are over 60,000 people enrolled in Japanese language programs in Japan; and according to the Japan Foundation, there are approximately 1,000,000 Japanese language students worldwide, with a total of over 8,200 Japanese language instructors in 4,000 institutes. However, without a more effective and productive methodology for reading Japanese and for building Japanese language vocabulary, the level and breadth of the information making its way to non-natives should not be expected to improve.

The foregoing is but one example of the many difficulties one is faced with when acquiring or using difficult or unfamiliar material. The first challenge anyone reading a difficult text is faced with is the issue of character recognition and pronunciation. For example, a student of the Japanese language spends many frustrating hours counting character strokes and looking up characters in a dictionary. Challenges such as this are the primary reason so many people give up on Japanese after a short trial period. It is also the reason that people who continue to pursue the language are unable to build an effective vocabulary.

Knowing the “yomi” or pronunciation or reading of a word is essential to memorize and assimilate the word into one's vocabulary. This allows the student to read a word in context and often deduce its meaning. But in many cases, the word may be entirely new to the reader, or it may be a usage that the reader has never seen. Looking up the word in the dictionary or asking a native speaker are the only options available to a student. Once the yomi for the word is known, along with the meaning and usage of the word in context, the final challenge is to memorize the word and make it a part of a usable vocabulary.

The sheer number of characters in ideographic alphabets, such as Kanji, presents unique challenges for specifying and identifying individual characters.

Various schemes have been proposed and descriptions can be found in the literature for the entry of Kanji characters into computers and the like.

See, for example, Y. Chu, Chinese/Kanji Text and Data Processing, IEEE Computer (January 1985); J. Becker, Typing Chinese, Japanese, and Korean, IEEE Computer (January 1985); R. Matsuda, Processing Information in Japanese, IEEE Computer (January 1985); R. Walters, Design of a Bitmapped Multilingual Workstation, IEEE Computer (February 1990); and J. Huang, The Input and Output of Chinese and Japanese Characters, IEEE Computer (January 1985).

And, see J. Monroe, S. Roberts, T. Knoche, Method and Apparatus for Processing Ideographic Characters, U.S. Pat. No. 4,829,583 (9 May 1989), in which a specific sequence of strokes is entered into a 9×9 matrix, referred to as a training square. This sequence is matched to a set of possible corresponding ideographs. Because the matrix senses stroke starting point and stroke sequences based on the correct writing of the ideograph to be identified, this system cannot be used effectively until one has mastered the writing of the ideographic script. See, also G. Kostopoulos, Composite Character Generator, U.S. Pat. No. 4,670,841 (2 Jun. 1987); A. Carmon, Method and Apparatus For Selecting, Storing and Displaying Chinese Script Characters, U.S. Pat. No. 4,937,745 (26 Jun. 1990); and R. Thomas, H. Stohr, Symbol Definition Apparatus, U.S. Pat. No. 5,187,480 (16 Feb. 1993).

A text revision system is disclosed in R. Sakai, N. Kitajima, C. Oshima, Document Revising System For Use With Document Reading and Translation System, U.S. Pat. No. 5,222,160 (22 June 1993), in which a foreigner having little knowledge of Japanese can revise misrecognized imaged characters during translation of the document from Japanese to another language. However, the system is provided for commercial translation services and not intended to educate a user in the understanding or meaning of the text.

Thus, although much attention has been paid, for example, to the writing, identification, and manipulation of ideographic characters, none of these approaches is concerned with providing a language learning system. The state of the art for ideographic languages, such as Japanese, does not provide an approach to learning the language that meets the four primary challenges discussed above, i.e. reading the language (for example, where an ideographic alphabet is used), comprehending the meaning of a particular word encountered while reading the language, understanding the true meaning of the word within the context in which the word is used, and including the word in a personal dictionary to promote long term retention of the meaning of the word. A system that applies this approach to learning a language would be a significant advance in bridging the gap between the world's diverse cultures because of the increased understanding that would result from an improved ability to communicate with one another. Such a system would only be truly useful if it were based upon an indexing scheme that allowed meaningful manipulation and display of the various elements of the language.

The invention provides a unique system for indexing displayed elements and finds ready application, for example in a language learning system that enhances and improves the way non-natives read foreign languages, for example the way a native English speaker reads Japanese. The language learning system provides a more effective way for people to read and improve their command of the foreign language, while at the same time communicating insightful and relevant cultural, social, and economic information about the country.

The learning model used by the language learning system is straightforward and is based upon methods that are familiar to most learners of foreign languages. The system addresses the four challenges of reading a foreign language, such as Japanese: i.e. reading the foreign word or character, such as Kanji in the case of a language having an ideographic alphabet, such as Japanese; comprehending the meaning of the word; understanding the word in context; and including the word in a personal vocabulary.

The exemplary embodiment of the invention includes one or more foreign language books that are read on an electronic display of a personal computer. English word references are available for each word in such books. The definitions of such words are derived from well known foreign language dictionaries. With regard to the Japanese language, the system saves significant amounts of time and effort by eliminating the need for the user to look up Japanese characters in a Kanji dictionary.

When one uses the system, the pronunciations or readings (‘yomi’) for all words are immediately viewable in a pop-up window without accessing a disk based database, for example by clicking a mouse on a selected word or phrase. In the same pop-up window, the system provides an English reference to any word that is also selected by clicking on the selected word or phrase. The system provides extensive notes for difficult phrases and words not normally found in a dictionary, and includes a relational database designed for managing and studying words. This allows a user to build a personal database of words that he would like to master. Words may also be entered from other sources that are currently in paper or other electronic formats. A unique indexing scheme allows word-by-word access to any of several external multi-media references.

FIG. 1 is a block schematic diagram of a language learning system according to the invention;

FIG. 2 is a flow diagram in which the mechanism for indexing and linking text to external references is shown according to the invention;

FIG. 3 is a screen display showing a highlighted Japanese word and a pop-up menu, including an English reference to the Japanese word, according to the invention;

FIG. 4 is a screen display showing a highlighted Japanese word and a pop-up menu, including Japanese language annotations of the Japanese word, according to the invention; and

FIG. 5 is a screen display showing a Japanese word listed in a personal dictionary, as well as a word control palette, according to the invention.

The invention provides a system that is designed to enhance and improve the way one reads or learns to read a difficult text, such as a foreign language, especially a language based upon an ideographic alphabet, such as Kanji which is used in the Japanese language. The text may be any of actual text based material, or audio, video, or graphic based information. In the language learning application, the system is modeled on the process by which the foreign language is read and addresses the problems most persons face when reading a language that is different from their own.

The exemplary embodiment of the invention is based upon two powerful functional modules that provide a comprehensive approach to reading and learning a foreign language, such as Japanese. The first module is an electronic viewer that gives the user access to reference information on each word in the electronic text at a word by word level. The second module is a relational database that allows a user to create word lists with practically no limit in size. The two modules are integrated to provide the user with everything needed to read the foreign language quickly and enjoyably, as well as to build their own individual vocabulary.

FIG. 1 is a block schematic diagram of an exemplary embodiment of the invention that implements a language learning system. An electronic book and/or a multi-media source material is provided as a teaching resource. A text file 10 and/or a multimedia source 14, consisting of an audio/video filter 11 and synchronized text 13, which may include sound, images, and/or video, is edited during construction of a linked text database by a visual editor 19 that is used to build a wordified database 20. The database 20 sources a grammar parser 23 and a link engine 22 that builds an index 21 which, in turn, locates each textual and audio/video reference in the source material. The index provides a location for each reference in a database 12 that includes a relational database engine 15, and linkable entities, such as text references 16, audio references 17, graphic references 18, and the like.
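By way of illustration only (this sketch is not part of the original disclosure, and every name in it is hypothetical), the look-up table that the link engine 22 and index 21 are described as building can be pictured as a list of starting and ending offsets, each paired with a link, laid out from the beginning position of the wordified text:

```python
# Illustrative sketch only: a minimal offset look-up table of the kind the
# link engine 22 and index 21 are described as building. All names
# (Piece, build_lookup_table) are hypothetical, not taken from the patent.

from dataclasses import dataclass
from typing import List

@dataclass
class Piece:
    text: str   # a discrete piece produced by the word-cut step
    link: str   # identifier of an external reference (text, audio, video, picture)

def build_lookup_table(pieces: List[Piece]) -> List[tuple]:
    """Record, for each piece, its starting and ending offsets from the
    beginning position of the assembled source text, plus its link."""
    table = []
    offset = 0  # offset 0 corresponds to the beginning position address
    for piece in pieces:
        start = offset
        end = offset + len(piece.text) - 1
        table.append((start, end, piece.link))
        offset = end + 1
    return table

# Example: three "wordified" pieces of a sentence, each linked to a reference.
pieces = [Piece("東京", "ref:tokyo"), Piece("に", "ref:particle-ni"), Piece("行く", "ref:iku")]
print(build_lookup_table(pieces))
```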

The link engine 22 outputs the selected text to a word list 28 derived from the input text file 10 and/or audio/video information 14, and also outputs the reference information 24, consisting of linkable entities 25, 26, 27, which are derived from the indexed database 12. The indexor/viewer 29 creates a multi-media resource 30, such as a file 33 that was processed as described above to produce a data resource 34, an offset index 35, and linked entities 36 to the data resource for access by the user.
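Continuing the sketch above (again purely illustrative, with hypothetical names), the offset index 35 lets the viewer resolve a user's selection at read time: the selected display address is converted to an offset from the beginning position address and compared against the recorded start and end offsets to find the linked entity:

```python
# Illustrative sketch only: resolving a selected display position to a linked
# external reference through the offset index. The table format matches the
# build_lookup_table() sketch above; all names here are hypothetical.

from typing import List, Optional

def find_reference(table: List[tuple], beginning_address: int,
                   display_address: int) -> Optional[str]:
    """Convert the display address to an offset from the beginning position
    address, then compare it with the recorded start/end offsets to identify
    the discrete piece and return its recorded link."""
    offset = display_address - beginning_address
    for start, end, link in table:
        if start <= offset <= end:
            return link
    return None  # selection fell outside any recorded piece

# Example: the user clicks at display address 103 in text that begins at 100.
table = [(0, 1, "ref:tokyo"), (2, 2, "ref:particle-ni"), (3, 4, "ref:iku")]
print(find_reference(table, beginning_address=100, display_address=103))  # ref:iku
```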

A user interface 32 to the system includes an electronic viewer 43 that runs along with the system application program 42 and provides the following functional elements: index management 37, user display 38, a table of contents 39, a pop-up display 40, and a personal dictionary 41.

The electronic viewer module is used to view and read the electronic books provided with the language learning system. The module includes the following features:

The personal dictionary is a relational database that is optimized to manage and study words. Unlike electronic dictionaries, where only the word entries of the dictionary are searchable, the personal dictionary of the system herein allows one to search on each of eight or more keys associated with a word.

The following functions are supported by the personal dictionary:

The personal dictionary also allows the user to enter words from other sources that are currently in paper or other electronic formats. For example, a user can copy all the words that they have in paper format from study lists and notes. With this feature, a student can have all of his study materials in one easy to access database. Users can also import and export data in a text format supported by standard word processor and spreadsheet programs.

The exemplary personal dictionary includes a base 500-word vocabulary list designed for the beginning student. A variety of words are included related to such general topics as: foods and drink, family, health, the body, commuting and transportation, environment, economics, finance, politics, companies, industries, computers, sports, and the language itself.

The system includes one or more electronic books. The words in each book are fully supported with readings, English references, and hypernotes. In the exemplary embodiment of the invention there are typically over 10,000 words, as well as over 1,000 notes presented in an easy to read, easy to memorize format.

The English reference feature of the system provides basic information to help users understand the word in its context. For each word, a generalized definition of the word is provided. The pop-up fields are used to give the user a quick reference to the word and to allow the user to continue reading or reviewing the text.

Current electronic book formats provide simple hyperlinks in what is termed hypertext or multimedia. Hyperlinks to date have been simple pointers that directly link text with other text, graphics, or sound within the text file itself. For reference materials, such as electronic encyclopedias and dictionaries, hyperlinks provide a quick and easy way to find material related to a topic or subject. However, these links must be hard coded and are therefore cumbersome to author. The format of the system herein described provides a new means of relating text, pictures, and/or video with information to enrich and expand the impact of every element in a text, picture, or video. This format differs from current electronic books, which only link text with other parts of the text or content.

In the new format of the present system, every word or sound, for example, can be linked to information not contained within the text using an indexing method that maps a single word or phrase to a table that contains an external reference. This reference can be in the form .

The Main Control Buttons are located just below the Word field. The arrow keys display the next or previous words based on the sort key indicated by the Sort Button in the bottom left corner. The Show Notes button displays the Note information about the Word. This button toggles to Hide Notes when the field is displayed and Show Notes when hidden. Additional notes and annotations can be entered directly. The Quick Search button displays the word in a pop-up window for quick search of a single character. After the pop-up is displayed, the user can click on the desired character to search. The Flash Words button displays the words in the personal dictionary in slide show fashion. Sort order or random order is selectable; sort order uses the current sort order.

The Find button displays the search dialogue window. Words are searchable by the following keys: Word, Yomi, English Reference, Category, Source, Priority, or Date. The personal dictionary supports logical “AND” searching for each of the above keys. The following features are supported:

Both the electronic viewer module and the personal dictionary module provide search features accessible via the Word Menu. After selecting Find from the menu, the search dialogue appears.

The electronic viewer module includes a simple search feature that allows the user to search for a string of text anywhere in the book. The user enters the desired text and clicks Find to execute the Search. Find Next searches for the next occurrence of the word in the text.

In the personal dictionary, a slightly more complex search feature is provided. The search dialogue allows the user to enter multiple search terms. For example, a user can search for a certain term in the ‘Economics’ category, or the user can look for a Kanji that has a certain reading. More search terms result in increased search time. The search terms for Word, Yomi, Reference, Note, and Source are indexed based on the first seven characters contained in the field. Characters appearing after the seventh character in any of these fields are not found with the ‘Starts With’ selection. Selecting ‘Contains’ searches the entire text in the field.
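A minimal sketch of the difference between the two search modes follows; it is an assumption of this description rather than code from the system, and the function names are invented for illustration:

```python
# Illustrative sketch only: contrasting the 'Starts With' search (which can
# rely on an index over the first seven characters of a field) with the
# 'Contains' search (which scans the entire field). Hypothetical names.

def starts_with_match(field_value: str, term: str) -> bool:
    # The index covers only the first seven characters, so an indexed
    # 'Starts With' search can match at most a seven-character prefix.
    return field_value[:7].startswith(term[:7])

def contains_match(field_value: str, term: str) -> bool:
    # Scans the whole field, so it also finds text past the seventh character.
    return term in field_value

entry = "international trade"
print(starts_with_match(entry, "interna"))  # True: within the indexed prefix
print(starts_with_match(entry, "trade"))    # False: not a prefix match
print(contains_match(entry, "trade"))       # True: full-text scan finds it
```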

To search, the user enters the desired word or character and then selects ‘Starts With’ or ‘Contains’ from the menu. A ‘Starts With’ search is the fastest. The ‘Category’ search terms are based on the category list. The integers 1 to 5 can be entered for ‘Priority.’ Date searching can be performed as ‘is’, ‘is before’, or ‘is after.’ After entering the desired search information, the user clicks ‘Find’ to execute the Search. Find Next searches for the next occurrence in the personal dictionary.

Importing/Exporting Word Lists

Text files can be read into the personal dictionary to make data exchange with other programs and colleagues feasible. The following format should be followed to allow accurate importing. One may use a spreadsheet program to build the word table and export the information as a tab delimited file. If a word processor is used, the user must add an extra tab for blank fields and follow the format listed below. In the exemplary embodiment of the invention, Export and Import uses the following format:

Word [TAB];
Pronunciation [TAB];
Meaning [TAB];
Notes [TAB];
Category [TAB];
Source [TAB];
Priority [TAB]; and
Date [Hard Return].

Setting up the Word field as column A in a spreadsheet and then exporting as a text file results in this format. If a word processor is used, one should also save as a text file. One should not include any hard returns (user entered returns) within the string of text for the given word. If given the option, the user should elect to have soft returns (automatically entered returns) deleted. To import, the user selects Import Words from the File Menu, and then chooses the file for import. To export, the user selects Export Words from the File Menu, and then enters a name for the given file.
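The following sketch illustrates one way the tab-delimited export and import format listed above could be read and written; it assumes a simple dictionary-per-word representation and hypothetical helper names, and is not taken from the actual program:

```python
# Illustrative sketch only: reading and writing word-list records in the
# eight-field, tab-delimited format listed above (Word, Pronunciation,
# Meaning, Notes, Category, Source, Priority, Date). Hypothetical names.

FIELDS = ["Word", "Pronunciation", "Meaning", "Notes",
          "Category", "Source", "Priority", "Date"]

def export_words(words: list) -> str:
    """Each word becomes one tab-delimited line; blank fields keep their tab."""
    lines = []
    for entry in words:
        lines.append("\t".join(str(entry.get(f, "")) for f in FIELDS))
    return "\n".join(lines) + "\n"   # the Date field ends with a hard return

def import_words(text: str) -> list:
    """Parse an exported text file back into word records."""
    records = []
    for line in text.splitlines():
        if not line.strip():
            continue
        values = line.split("\t")
        records.append(dict(zip(FIELDS, values)))
    return records

sample = [{"Word": "経済", "Pronunciation": "けいざい", "Meaning": "economy",
           "Category": "Economics", "Priority": 1, "Date": "1994-02-16"}]
print(import_words(export_words(sample)))
```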

Although the invention is described herein with reference to the preferred embodiment, one skilled in the art will readily appreciate that other applications may be substituted for those set forth herein without departing from the spirit and scope of the present invention. For example, the invention may be used to index images such that elements of the image are linked to an external reference. Thus, an illustration of the human body may include descriptive external resources for each of the body's internal organs, and would thereby aid in the study of anatomy. Likewise, a video or other moving picture display, for example animated displays, could be indexed such that the picture could be stopped and various elements within a frame of the picture could be examined through links to external references. The invention allows such application because it does not embed information within the source material as is the case with prior art hyperlink technology. Rather, the invention creates a physical counterpart to the image in which a selected image position defines an offset that locates a desired external reference. Accordingly, the invention should only be limited by the claims included below.
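As a purely illustrative sketch of the image example above (with hypothetical names and coordinates, and using two-dimensional regions in place of a literal linear offset), a selected position within the displayed image can be compared against recorded region boundaries to locate the corresponding external reference:

```python
# Illustrative sketch only: applying the offset/look-up idea to an image, as
# in the anatomy example above. Regions of the image stand in for the
# discrete pieces; all names and coordinates here are hypothetical.

from typing import List, Optional, Tuple

# Each entry: (left, top, right, bottom, link to an external reference)
Region = Tuple[int, int, int, int, str]

def find_image_reference(regions: List[Region], x: int, y: int) -> Optional[str]:
    """Treat the selected position as an offset into the image and return the
    link recorded for the region that contains it, if any."""
    for left, top, right, bottom, link in regions:
        if left <= x <= right and top <= y <= bottom:
            return link
    return None

anatomy_regions = [
    (120, 80, 180, 140, "ref:heart"),
    (60, 150, 240, 260, "ref:liver"),
]
print(find_image_reference(anatomy_regions, 150, 100))  # ref:heart
```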

Yamanaka, Brian, Bookman, Mark

Executed on | Assignor | Assignee | Conveyance | Frame/Reel/Doc
Feb 11 1994 | BOOKMAN, MARC | MEDIUS CORPORATION | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 0297820302
Feb 11 1994 | YAMANAKA, BRIAN | MEDIUS CORPORATION | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 0297820302
Aug 13 1996 | MEDIUS CORPORATION | Sentius Corporation | CHANGE OF NAME (SEE DOCUMENT FOR DETAILS) | 0297820764
Mar 01 2004 | Sentius Corporation | Sentius International Corporation | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 0297820866
Feb 24 2005 | Sentius International Corporation | (assignment on the face of the patent)
Jan 01 2010 | Sentius International Corporation | Sentius International, LLC | CHANGE OF NAME (SEE DOCUMENT FOR DETAILS) | 0297830761
Jan 01 2010 | Sentius International, LLC | Sentius International, LLC | CHANGE OF ADDRESS | 0298670618
Date | Maintenance Fee Events
Apr 13 2010 | M2553: Payment of Maintenance Fee, 12th Yr, Small Entity.


Date | Maintenance Schedule
Jun 09 2012 | 4 years fee payment window open
Dec 09 2012 | 6 months grace period start (w surcharge)
Jun 09 2013 | patent expiry (for year 4)
Jun 09 2015 | 2 years to revive unintentionally abandoned end. (for year 4)
Jun 09 2016 | 8 years fee payment window open
Dec 09 2016 | 6 months grace period start (w surcharge)
Jun 09 2017 | patent expiry (for year 8)
Jun 09 2019 | 2 years to revive unintentionally abandoned end. (for year 8)
Jun 09 2020 | 12 years fee payment window open
Dec 09 2020 | 6 months grace period start (w surcharge)
Jun 09 2021 | patent expiry (for year 12)
Jun 09 2023 | 2 years to revive unintentionally abandoned end. (for year 12)