Provided is a method for providing related content, comprising the steps of: detecting a sound output from a content playing device which plays content (referred to as "main content"); converting the detected sound into digital data; extracting a sound wave ID from the digital data; selecting related content corresponding to the sound wave ID; and displaying, by a mobile receiver, the related content as an image and/or speech.
17. A method for providing a related content in a mobile receiver which comprises a memory, a processor, a microphone to detect a sound outputted from a content playing device which plays a main content, and an ADC to convert the sound detected by the microphone into digital data, the method comprising:
extracting a non-audible sound wave ID from the digital data;
displaying a related content corresponding to the sound wave ID as an image and/or voice;
activating at least one element of elements which performs the extracting and the displaying; and
storing a main content schedule indicating times at which the main content is played,
wherein the sound wave ID is digital data indicating a non-audible sound which is artificially inserted into the sound outputted from the content playing device, or an audible fingerprint ID indicating a characteristic of a sound which is extracted from a sound included as a part of the sound outputted from the content playing device, and
wherein the activating is performed according to the time at which the main content is played with reference to the main content schedule.
1. A method for providing a related content in a mobile receiver which comprises a memory, a processor, a microphone to detect a sound outputted from a content playing device which plays a main content, and an ADC to convert the sound detected by the microphone into digital data, the method comprising:
extracting a non-audible sound wave ID from the digital data;
displaying a related content corresponding to the sound wave ID as an image and/or voice;
activating at least one element of elements performing the extracting and the displaying; and
storing a sound wave ID schedule indicating times at which the sound wave ID is extractable,
wherein the sound wave ID is digital data indicating a non-audible sound which is artificially inserted into the sound outputted from the content playing device, or an audible fingerprint ID indicating a characteristic of a sound which is extracted from a sound included as a part of the sound outputted from the content playing device, and
wherein the activating is performed at the time at which the sound wave ID is extractable with reference to the sound wave ID schedule.
2. The method of
3. The method of
4. The method of
transmitting the sound wave ID to a server; and
receiving the related content corresponding to the sound wave ID from the server.
5. The method of
determining whether the detected sound has an intensity greater than or equal to a predetermined threshold; and
when it is determined that the sound has the intensity greater than or equal to the predetermined threshold, activating the ADC and an element which performs the extracting of the sound wave ID, such that the elements operate.
6. The method of
wherein the ADC and the element which performs the extracting of the sound wave ID are activated simultaneously or in sequence when it is determined that the sound has the intensity greater than or equal to the predetermined threshold.
7. The method of
activating at least one element of elements which performs the extracting and the displaying, according to a schedule or when a predetermined event occurs.
8. The method of
9. The method of
Threshold > intensity of the detected sound > α × threshold (herein, α is smaller than 1).
10. The method of
11. The method of
12. The method of
13. The method of
14. The method of
when the predetermined event occurs, delivering the occurrence of the event to an operating system;
delivering, by the operating system, the occurrence of the predetermined event to filters which are provided to correspond to applications operating in the mobile receiver; and
delivering or refraining from delivering, by the filters, the occurrence of the predetermined event to the applications corresponding thereto according to predetermined settings.
15. The method of
16. The method of
selecting the related content, and
wherein an element which performs the selecting of the related content, and an element which performs the displaying of the related content as the image and/or the voice, are activated simultaneously or in sequence when the sound wave ID is extracted from the digital data.
18. The method of
19. The method of
This application is a national phase application of PCT Application No. PCT/KR2015/000890, filed on 28 Jan. 2015, which claims benefit of Korean Patent Application 10-2014-0059864, filed on 19 May 2014 and Korean Patent Application 10-2014-0028614, filed on 11 Mar. 2014. The entire disclosures of the applications identified in this paragraph are incorporated herein by reference.
The present invention relates to a system and a method for providing a related content, which detect, at low power, a sound included in a content and provide a related content, and to a computer readable recording medium having a program recorded therein.
The present invention was supported by National Research and Development Project Business of the Ministry of Science, ICT and Future Planning, as follows:
[National Research and Development Project Business supporting the present invention]
[Project Number] R07581510960001002
[Related Department] Ministry of Science, ICT and Future Planning
[Research Management Specialized Agency] Institute for Information & Communications Technology Promotion
[Research Business Name] ICT emerging technology development (ICT Small and Medium/Venture Enterprises Technology Development Support)
[Research Project Title]
TV-Mobile Interworking Next-generation Customized Advertisement Platform Development using Small Loudness Inaudible Sound Wave Communications
Related-art product advertisement methods include not only a direct advertisement method, which directly provides an advertisement through newspaper, radio, and TV broadcasts, but also a method which advertises a product indirectly by applying the product to be advertised to a prop which appears in a content such as a movie, a drama, a radio broadcast, or the like, or to a product which is naturally mentioned as a part of the content (hereinafter, a product indirect advertisement or an indirect advertisement). In particular, the product indirect advertisement can provide a powerful advertising effect to viewers and listeners, and its importance is increasing. However, the related-art indirect advertisement simply exposes products on a screen or merely mentions them in a sound. Therefore, there is a problem in that viewers or listeners who desire to purchase the products must separately search for relevant information about the products and for retailers, and then purchase the products based on the searched information. Various technologies for solving this problem have been suggested. For example, Korean Patent Registration No. 10-1310943 (Sep. 11, 2013) (System and Method for Providing Content-Related Information Related to Broadcast Content), Korean Patent Registration No. 10-1313293 (Sep. 24, 2013) (System and Method for Providing Additional Information of Broadcast Content), and Korean Patent Registration No. 10-0893671 (Apr. 9, 2009) (Generation and Matching of Hashes of Multimedia Content) disclose methods for providing a related content on a product indirect advertisement included in a content that a viewer or a listener views or listens to through their mobile receivers, using a sound characteristic of the broadcasted or played content. In another example, Korean Patent Registration No. 10-1363454 (Feb. 10, 2014) (System for Providing User-Customized Advertisement Based On Sound Signal Outputted from TV, Method for Providing User-Customized Advertisement, and Computer Readable Recording Medium Having MIM Service Program Recorded Therein) discloses a method for providing a related content on a product indirect advertisement included in a content that a viewer or a listener views or listens to through their mobile receivers by inserting a non-audible sound wave into the broadcasted or played content.
However, the above-explained technologies still have a problem in that the viewer or listener must actively request the related content using their own mobile receiver. That is, when a content is played, the user must execute an application of the mobile receiver to receive a related content, or must instruct the application of the mobile receiver which provides the related content to recognize the content. Requiring the user to make this extra effort runs contrary to the behavior pattern of a viewer or listener who passively enjoys the content, and may interfere with the viewer's or listener's immersion. Therefore, the effectiveness of the related content may be reduced.
This problem may be overcome by continuously extracting a sound characteristic or a non-audible sound wave in the mobile receiver. However, continuous extraction may quickly drain the limited battery by which the mobile receiver operates.
An exemplary embodiment of the present invention provides a low-power related content providing system which normally operates at low power and can automatically provide a related content on a main content that a viewer or listener is viewing or listening to without requiring the viewer or listener to actively find the related content.
An exemplary embodiment of the present invention provides a low-power related content providing system, which extracts, at low power, a sound wave ID included in a main content which is broadcasted or played through a content playing device, and can provide a related content on an indirect advertisement product included in the main content without requiring a viewer or listener to make an extra effort through a mobile receiver and without interfering with the user's immersion in the main content.
An exemplary embodiment of the present invention provides a low-power related content providing method which normally operates at low power and can automatically provide a related content on a main content that a viewer or listener is viewing or listening to without requiring the viewer or listener to actively find the related content.
An exemplary embodiment of the present invention provides a method for providing a related content, which extracts, at low power, a sound wave ID included in a main content which is broadcasted or played through a content playing device, and can provide a related content on an indirect advertisement product included in the content without requiring a viewer or listener to make an extra effort through a mobile receiver and without interfering with the user's immersion in the main content.
An exemplary embodiment of the present invention provides a recording medium having a program recorded therein for executing, through a computer, a low-power related content providing method, which normally operates at low power and can automatically provide a related content on a main content that a viewer or listener is viewing or listening to without requiring the viewer or listener to actively find the related content.
An exemplary embodiment of the present invention provides a recording medium having a program recorded therein for executing, through a computer, a method for providing a related content, which extracts, at low power, a sound wave ID included in a main content which is broadcasted or played through a content playing device, and can provide a related content on an indirect advertisement product included in the main content without requiring a viewer or listener to make an extra effort through a mobile receiver and without interfering with the user's immersion in the main content.
According to an embodiment of the present invention, there is provided a system and a method for providing a related content at low power, or a computer readable recording medium having a program recorded therein.
According to an exemplary embodiment, there is provided a method for providing a related content, the method including the steps of: detecting a sound which is outputted from a content playing device which reproduces a content (referred to as a “main content”); converting the detected sound into digital data; extracting a sound wave ID from the digital data; selecting a related content corresponding to the sound wave ID; and displaying, by a mobile receiver, the related content as an image and/or voice.
According to an exemplary embodiment, there is provided a mobile receiver for providing a related content, which includes a memory and a processor, the mobile receiver including: a microphone configured to detect a sound which is outputted from a content playing device which reproduces a content (referred to as a “main content”); an ADC configured to convert the sound detected by the microphone into digital data; a sound wave ID extraction unit configured to extract a sound wave ID from the digital data; and a display unit configured to display a related content selected according to the sound wave ID.
According to an exemplary embodiment, there is provided a computer readable medium which has a program recorded therein, for executing a method for providing a related content in a computer, the computer including a memory, a processor, a microphone which detects a sound outputted from a content playing device which reproduces a content (referred to as a "main content"), and an ADC which converts the sound detected by the microphone into digital data, wherein the method for providing the related content includes the steps of: extracting a sound wave ID from the digital data; and displaying a related content which is selected according to the sound wave ID through a display unit provided in the computer.
According to one or more embodiments of the present invention, a sound wave ID included in a main content which is broadcasted or played through a content playing device is extracted at low power, and a related content on an indirect advertisement product included in the main content can be provided without requiring a viewer or listener to make an extra effort through a mobile receiver and without interfering with the user's immersion in the main content.
Exemplary embodiments will now be described more fully with reference to the accompanying drawings to clarify aspects, other aspects, features and advantages of the present invention. The exemplary embodiments may, however, be embodied in many different forms and should not be construed as limited to the exemplary embodiments set forth herein. Rather, the exemplary embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the application to those of ordinary skill in the art.
The terms used herein are for the purpose of describing particular exemplary embodiments only and are not intended to be limiting. As used herein, the singular forms are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, do not preclude the presence or addition of one or more other components.
Hereinafter, exemplary embodiments will be described in greater detail with reference to the accompanying drawings. The matters defined in the description, such as detailed construction and elements, are provided to assist in a comprehensive understanding of the exemplary embodiments. However, it is apparent that the exemplary embodiments can be carried out by those of ordinary skill in the art without those specifically defined matters. In the description of the exemplary embodiment, certain detailed explanations of related art are omitted when it is deemed that they may unnecessarily obscure the essence of the present invention.
Related Content Providing System
Referring to
For example, the mobile receiver 200 may provide a related content including an advertisement related to at least one of an image and a sound outputted as the main content. For example, the mobile receiver 200 may provide a related content regarding an indirect advertisement product which is displayed as an image or mentioned as a sound in the main content. According to an exemplary embodiment, when there are a plurality of indirect advertisement products displayed as images or mentioned as sounds in the main content, the mobile receiver 200 may provide related contents regarding the respective products.
The related content providing system according to an exemplary embodiment may include the content playing device 100, the mobile receiver 200, and a server 300.
The content playing device 100 may output an image and/or a sound. The mobile receiver 200 may extract at least one sound wave ID from the sound outputted from the content playing device 100. For example, the sound wave ID may be matched to the related content through a related content database (DB) (i.e., a DB in which sound wave IDs and related contents are matched to each other).
For example, the sound wave ID may be used to identify a main content, identify a relative time location in the main content, identify a related content related to (corresponding to) an indirect advertisement in the main content, or identify a related content related to (corresponding to) an indirect advertisement at the relative time location in the main content.
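For illustration only, the matching described above can be sketched as a lookup against a small in-memory table. The field names (a content identifier plus a relative time offset) and the tolerance window are assumptions of this sketch, not the actual DB schema of the invention.

```python
# Illustrative sketch (assumed schema, not the actual related content DB):
# a sound wave ID is modeled as a (content_id, time_offset) pair, and the
# related content DB as a dictionary keyed on such pairs.
RELATED_CONTENT_DB = {
    ("drama_ep01", 0): "Ad: opening-scene product",
    ("drama_ep01", 600): "Ad: product shown at 10 min",
}

def lookup_related_content(content_id, time_offset_s, tolerance_s=30):
    """Return the related content registered closest to the extracted
    relative time location, within a tolerance; None if no match."""
    best, best_gap = None, None
    for (cid, t), content in RELATED_CONTENT_DB.items():
        if cid != content_id:
            continue
        gap = abs(t - time_offset_s)
        if gap <= tolerance_s and (best_gap is None or gap < best_gap):
            best, best_gap = content, gap
    return best
```

A tolerance window is used because the extracted time offset may drift slightly from the registered time location.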
The content playing device 100 may include a speaker or a sound output terminal for playing a sound, and further include a monitor or an image output terminal for displaying an image, like a TV, a radio, a desktop computer, and a mobile device (for example, a laptop computer, a tablet computer, a smart phone, a wearable device, or the like).
The mobile receiver 200 is a mobile device which extracts a sound wave ID included in a sound of a main content outputted from the content playing device 100, and provides a related content corresponding to the sound wave ID to a viewer or a listener of the main content in such a form that the related content is recognized with eyes or ears. In this case, the viewer or listener may be a person who owns the mobile receiver. For example, the mobile receiver 200 may include a mobile device such as a laptop computer, a tablet computer, a smart phone, a wearable device, or the like. In addition, the mobile receiver 200 may include a microphone (not shown) to receive the sound wave ID and an ADC (not shown).
In an exemplary embodiment, when the sound wave ID is extracted, the mobile receiver 200 transmits the sound wave ID to the server 300, and receives the related content corresponding to the sound wave ID from the server 300 and displays the related content as a sound or an image. For example, the mobile receiver 200 may display the related content corresponding to the sound wave ID for the user in the form of a notification message, an image, and/or a voice.
In an exemplary embodiment, the mobile receiver 200 is a device which has mobility like a mobile device such as a laptop computer, a tablet computer, a wearable computer, a smart phone, or the like, and may include a microphone (not shown) to recognize a sound, a communication unit (not shown) to communicate with the server, a computer processor (not shown), a memory (not shown), and a program (not shown). Herein, the program may be loaded into the memory under the control of the computer processor to perform overall operations (for example, operations of recognizing a sound wave ID, transmitting the recognized sound wave ID to the server 300, and displaying a related content received from the server 300 in the form of a notification message, an image, and/or a voice).
Next, how a sound wave ID is included in a content will be described.
A technique of inserting a sound wave ID into a main content and extracting the sound wave ID may use technologies disclosed in the following patent documents.
For example, a technique of inserting an ID into a sound and a technique of extracting the content ID from the sound wave may use technology disclosed in Korean Patent Application No. 10-2012-0038120, filed on Apr. 12, 2012, by the same inventor with the KIPO (titled “Method and System for Estimating Location of Mobile Terminal Using Sound System, and Sound System Using the Same”).
In another example, technology disclosed in Korean Patent Application No. 10-2012-0078410, filed on Jul. 18, 2012, by the inventor of this application with the KIPO (titled “Method and System for Collecting Proximity Data”) (the features of including an ID in a sound signal and recognizing and extracting the ID included in the sound signal) may be used.
In another example, technology disclosed in Korean Patent Application No. 10-2012-0053286, filed on May 18, 2012, by the inventor of this application with the KIPO (titled “System for Identifying Speaker and Location Estimation System Using the Same”) (the features of including an ID in a sound signal and recognizing and extracting the ID included in the sound signal) may be used.
In another example, technology disclosed in Korean Patent Application No. 10-2012-0078446, filed on Jul. 18, 2012, by the inventor of this application with the KIPO (titled “Method and Apparatus for Calculating Intimacy Between Users Using Proximity Information”) (the features of including an ID in a sound signal and recognizing and extracting the ID included in the sound signal) may be used.
In another example, technology disclosed in Korean Patent Application No. 10-2013-0107604, filed on Sep. 6, 2013, by the inventor of this application with the KIPO (titled “Method for Transmitting and Receiving Sound Wave Using Time-Varying Frequency Based-Symbol, and Apparatus Using the Same”) (the features of including digital information in a sound signal and recognizing and extracting the digital information included in the sound signal) may be used.
The technologies disclosed in the specifications of the above-mentioned Patent Application Nos. 10-2012-0038120, 10-2012-0078410, 10-2012-0053286, 10-2012-0078446, and 10-2013-0107604 are incorporated into the specification of the present application and are dealt with as a part of the specification of the present application.
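As a rough illustration of the general idea behind inserting a non-audible sound wave ID (the referenced applications describe the actual schemes), an ID can be encoded as a sequence of near-ultrasonic tones. The band (18.0 to 19.5 kHz), symbol duration, and FSK-style mapping below are assumptions of this sketch only.

```python
import math

# Minimal FSK-style sketch: each 4-bit symbol maps to one near-ultrasonic
# frequency, largely inaudible to listeners but within the passband of
# ordinary speakers and microphones. All parameters are illustrative.
BASE_HZ = 18000.0
STEP_HZ = 100.0
SAMPLE_RATE = 44100

def symbol_to_freq(sym):
    """Map a 4-bit symbol (0..15) to its carrier frequency."""
    return BASE_HZ + STEP_HZ * sym

def encode_id(bits, symbol_len=0.1):
    """Encode a bit string as a list of sine-wave sample blocks,
    one block of symbol_len seconds per 4-bit symbol."""
    blocks = []
    for i in range(0, len(bits), 4):
        sym = int(bits[i:i + 4].ljust(4, "0"), 2)
        f = symbol_to_freq(sym)
        n = int(SAMPLE_RATE * symbol_len)
        blocks.append([math.sin(2 * math.pi * f * t / SAMPLE_RATE)
                       for t in range(n)])
    return blocks
```

The receiver side would detect which of the sixteen carrier frequencies dominates each symbol interval and reassemble the bit string.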
A technique of extracting an audible fingerprint sound wave ID corresponding to a characteristic of a sound included in a main content may use technologies disclosed in the following patent documents.
For example, technologies disclosed in Korean Patent Registration Nos. 10-1310943 (Sep. 11, 2013) (titled “System and Method for Providing Content Related Information Related to Broadcast Content”), 10-1313293 (Sep. 24, 2013) (titled “System for Providing Additional Information of Broadcast Content and Method Thereof”), and 10-0893671 (Apr. 9, 2009) (titled “Generation and Matching of Hashes of Multimedia Content”) may be used.
The technologies disclosed in the specifications of the above-mentioned Korean Patent Registration Nos. 10-1310943, 10-1313293, and 10-0893671 are incorporated into the specification of the present application and are dealt with as a part of the specification of the present application.
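A toy illustration of the audible-fingerprint idea (the registered patents above describe robust production schemes): hash each short frame of the sound to its dominant frequency bin, so that the same content yields the same sequence of bins. The frame length and naive DFT scan are assumptions of this sketch.

```python
import math

def dominant_bin(frame, n_bins=32):
    """Return the index of the strongest of the first n_bins frequency
    bins, computed by a naive DFT (toy stand-in for an FFT peak pick)."""
    best_bin, best_mag = 0, -1.0
    N = len(frame)
    for k in range(1, n_bins):
        re = sum(frame[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
        im = -sum(frame[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
        mag = re * re + im * im
        if mag > best_mag:
            best_bin, best_mag = k, mag
    return best_bin

def fingerprint(signal, frame_len=64):
    """Fingerprint = sequence of dominant-bin indices, one per frame."""
    return tuple(dominant_bin(signal[i:i + frame_len])
                 for i in range(0, len(signal) - frame_len + 1, frame_len))
```

Matching a query fingerprint against a DB of registered fingerprints then identifies the main content and the relative time location in it.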
The server 300 may receive the sound wave ID from the mobile receiver 200, select a related content corresponding to the sound wave ID with reference to a related content DB (not shown) in which sound wave IDs match related contents, and transmit the selected related content to the mobile receiver 200.
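The server-side exchange just described can be sketched as a simple request handler. The message fields and the in-memory DB below are illustrative assumptions, not the actual server API.

```python
# Illustrative server-side handler (assumed message shape): resolve a
# received sound wave ID against the related content DB and return the
# match, or a not-found response.
CONTENT_DB = {
    "id_001": "Related ad for product A",
    "id_002": "Related ad for product B",
}

def handle_request(request):
    """request is assumed to be a dict such as {'sound_wave_id': 'id_001'}."""
    sound_wave_id = request.get("sound_wave_id")
    content = CONTENT_DB.get(sound_wave_id)
    if content is None:
        return {"status": "not_found"}
    return {"status": "ok", "related_content": content}
```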
In an exemplary embodiment, the server 300 may be an advertisement server which stores and manages advertisements, or may be a Multimedia Instant Messenger (MIM) service server which transmits a fingerprint and/or multimedia data to friends belonging to the same group.
When the server 300 is implemented by using the MIM service server, the server 300 may transmit the related content to a client program which interacts with the MIM service server.
Mobile Receiver for Extracting a Sound Wave ID
Referring to
According to an exemplary embodiment, the microphone 201 may detect a sound outputted from the content playing device 100. That is, the microphone 201 may convert the sound into an electric signal.
According to an exemplary embodiment, the ADC 203 may convert the sound detected by the microphone 201, which is an analogue sound wave signal, into digital data. In an exemplary embodiment, the ADC 203 may convert a sound wave signal detected by the microphone within a predetermined frequency range, or within the maximum frequency range allowable by the hardware, into digital data.
According to an alternative embodiment, the ADC 203 may convert the sound into the digital data only when the sound has an intensity greater than or equal to a predetermined threshold. In addition, the ADC 203 may convert the sound within the predetermined frequency range into the digital data only when the sound has an intensity greater than or equal to a predetermined threshold.
For example, the threshold may be a relative value such as a Signal to Noise Ratio (SNR) or an absolute value such as an intensity of a sound.
The processor 207 is a device which interprets a computer command and executes the command, and may be a central processing unit (CPU) of a general computer. The processor 207 may be an application processor (AP) in the case of a mobile device such as a smart phone, and may execute a program loaded into the memory 209.
The memory 209 may be implemented by using a volatile storage device such as a random access memory (RAM), and an operating system (OS) necessary for driving the mobile device 200, an application, data or the like may be loaded into the memory 209. The storage device 211 is a non-volatile storage device which stores data, and, for example, may be implemented by using a hard disk drive (HDD), a flash memory, a secure digital (SD) memory card, or the like.
The display unit 213 may display a content for the user as a voice, an image, and/or a fingerprint. For example, the display unit 213 may be implemented by using a display and a sound output terminal or a speaker connected thereto.
The communication unit 215 may be implemented by using a device for communicating with the outside. For example, the communication unit 215 may communicate with the server 300.
According to an exemplary embodiment, a related content providing application 223 may be loaded into the memory 209 and operated under the control of the processor 207. The related content providing application 223 may receive a sound using the microphone 201 and the ADC 203, and, when a sound wave ID is successfully extracted from the sound, transmit the extracted sound wave ID to the server 300 through the communication unit 215.
According to an exemplary embodiment, the related content providing application 223 may include, as an additional function, a function of determining whether the sound detected by the ADC 203 has an intensity greater than or equal to a threshold, and extracting a sound wave ID from the sound only when the sound has the intensity greater than or equal to the predetermined threshold value.
According to another exemplary embodiment, the related content providing application 223 may include, as an additional function, a function of determining whether the sound detected by the ADC 203 has an intensity greater than or equal to a threshold value within a predetermined frequency range, and extracting a sound wave ID from the detected sound only when the sound has the intensity greater than or equal to the predetermined threshold within the predetermined frequency range.
According to another exemplary embodiment, the operation of detecting and extracting the sound wave ID may be implemented by separate hardware rather than the related content providing application 223. That is, a sound wave ID extraction unit 205, which is illustrated by a dashed line in
According to an exemplary embodiment of the present invention, at least one of the microphone 201, the ADC 203, the sound wave ID extraction unit 205, the display unit 213, the communication unit 215, and the related content providing application 223 may be activated to perform its respective operation, after being in an inactive state, according to a schedule or when a predetermined event occurs.
For example, the microphone 201 and the ADC 203 may be activated to perform their respective operations according to a schedule or when a predetermined event occurs. In this example, the microphone 201 and the ADC 203 are elements which are activated, after being in an inactive state, according to a schedule or when a predetermined event occurs.
For example, when the related content providing application 223 is implemented to have the function of determining whether the sound detected by the microphone 201 has the intensity greater than or equal to the threshold within the predetermined frequency range, the related content providing application 223 is activated to perform its own operation when the microphone 201 and the ADC 203 are activated.
For example, when the related content providing application 223 is implemented to have the function of extracting the sound wave ID, the related content providing application 223 is activated to perform its own operation when the microphone 201 and the ADC 203 are activated.
For example, in an embodiment in which the sound wave ID extraction function is performed by the sound wave ID extraction unit 205, the related content providing application 223 may be activated only when the sound wave ID is successfully extracted. The related content providing application 223 which is activated may transmit the extracted sound wave ID to the server 300, and receive a related content corresponding to the sound wave ID from the server 300 and display the related content through the display unit 213 as a voice and/or an image. Alternatively, the related content providing application 223 which is activated may search for and select the related content corresponding to the extracted sound wave ID, and display the related content through the display unit 213 as a voice and/or an image. In this case, the related content database may be loaded into the memory 209, and the related content providing application 223 may scan the related content database and select the related content corresponding to the sound wave ID.
For example, the microphone 201 may be activated, after being in an inactive state, according to a schedule or when an event occurs. When the microphone 201 detects a sound, for example, a sound belonging to a predetermined band, the other elements (for example, the ADC 203, the related content providing application 223, and the sound wave ID extraction unit 205) may be activated simultaneously or in sequence.
For example, when the sound is detected by the microphone 201, an element for determining whether the detected sound has an intensity greater than or equal to a predetermined threshold (for example, the related content providing application) may be activated. In addition, when the detected sound is determined to have the intensity greater than or equal to the predetermined threshold, an element for extracting the sound wave ID (for example, the sound wave ID extraction unit) may be activated.
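The staged wake-up described above can be sketched as a simple pipeline in which each stage is activated only if the previous one succeeds, so the costlier stages stay off most of the time. The stage names and the stub detector/extractor callables are assumptions of this sketch.

```python
# Sketch of the staged activation chain:
# microphone -> intensity check -> sound wave ID extraction -> display.
def staged_pipeline(samples, threshold, extract_id, fetch_related):
    """Return (list of activated stages, related content or None).

    extract_id and fetch_related are caller-supplied stubs standing in
    for the sound wave ID extraction unit and the related content lookup.
    """
    activated = ["microphone"]             # always-on front end
    level = max(abs(s) for s in samples)   # crude intensity measure
    activated.append("intensity_check")
    if level < threshold:
        return activated, None             # later stages never wake up
    activated.append("id_extractor")
    sound_wave_id = extract_id(samples)
    if sound_wave_id is None:
        return activated, None             # no ID found; display stays off
    activated.append("display")
    return activated, fetch_related(sound_wave_id)
```

The power saving comes from the ordering: the cheap intensity check runs on every block, while extraction and display are activated only for blocks that pass it.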
The schedule will be described in detail in embodiments described with reference to
For example, at least one of the microphone 201, the ADC 203, the sound wave ID extraction unit 205, the display unit 213, the communication unit 215, and the related content providing application 223 may be activated periodically or aperiodically, after being in an inactive state, according to a schedule which is made with reference to a main content schedule. For example, the at least one element may be activated at a predetermined period during the time that the main content is played. Alternatively, the at least one element may be operated at a predetermined period during the time that the main content is played, and may be operated at a period which is longer than the predetermined period during the time that the main content is not played.
For example, at least one of the microphone 201, the ADC 203, the sound wave ID extraction unit 205, the display unit 213, the communication unit 215, and the related content providing application 223 may be activated periodically or aperiodically according to a schedule which is made with reference to a sound wave ID schedule. For example, the at least one element may be activated at the time when the sound wave ID is extractable.
For example, at least one of the microphone 201, the ADC 203, the sound wave ID extraction unit 205, the display unit 213, the communication unit 215, and the related content providing application 223 may be operated in a predetermined period, and, when the extraction of the sound wave ID fails but it is determined that the sound wave ID is present, may be activated in a period which is shorter than the predetermined period. Thereafter, when the ID is extracted, the at least one element is activated in the original period.
For example, at least one of the microphone 201, the ADC 203, the sound wave ID extraction unit 205, the display unit 213, the communication unit 215, and the related content providing application 223 may be operated in a predetermined period, and, when the following equation is satisfied, may be operated in a period which is shorter than the predetermined period:
Threshold > Intensity of detected sound > (α × Threshold), where α is smaller than 1.
That is, in an exemplary embodiment, when the intensity of the detected sound is smaller than the threshold but close to it, it is determined that a sound carrying the sound wave ID is likely to be present, and, when the above equation is satisfied, the at least one element is activated in a period which is shorter than the original period.
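The near-threshold condition above can be expressed as a simple period-selection rule (a sketch only; the concrete period values and the value of α are assumptions):

```python
def next_period(intensity, threshold, normal_period, short_period, alpha=0.8):
    """Select the next activation period: when the detected intensity is
    just below the threshold (threshold > intensity > alpha * threshold),
    a sound carrying the sound wave ID is deemed likely, so the element
    is activated in the shorter period."""
    if threshold > intensity > alpha * threshold:
        return short_period
    return normal_period
```

With α = 0.8 and a threshold of 0.5, an intensity of 0.45 falls inside the band and shortens the period, while a clearly quiet 0.10 (or a clearly loud 0.60, which would instead trigger extraction) keeps the normal period.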
For example, at least one of the microphone 201, the ADC 203, the sound wave ID extraction unit 205, the display unit 213, the communication unit 215, and the related content providing application 223 may be activated when a predetermined event occurs. Herein, the event refers to a case in which a specific application is executed, a case in which the display unit of the mobile receiver 200 is activated or inactivated, or a case in which the mobile receiver 200 receives a push message from the outside.
Herein, the specific application may be an application which uses the microphone for its own use. However, this is merely an example and the specific application may be other applications.
For example, at least one of the microphone 201, the ADC 203, the sound wave ID extraction unit 205, the display unit 213, the communication unit 215, and the related content providing application 223 may be operated in a predetermined period, and, when the predetermined event occurs, may not be operated. In addition, when the predetermined event is finished, the at least one element is activated according to the original period.
For example, when the predetermined event occurs, the processor 207 may deliver occurrence of the event to an operating system (OS) (not shown) of the mobile receiver 200 (which may be loaded into the memory 209), and the OS may deliver the occurrence of the event to filters (not shown) (which may be loaded into the memory 209) provided to correspond to respective applications operating in the mobile receiver 200. The filters may deliver or not deliver the occurrence of the event to the applications corresponding thereto according to predetermined settings. These operations will be described in detail in the embodiments described with reference to
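The delivery of an event occurrence through per-application filters can be sketched as follows (a minimal illustration; the filter interface and application names are assumptions):

```python
class EventFilter:
    """Per-application filter which passes or blocks OS events
    according to predetermined settings (a hypothetical interface)."""
    def __init__(self, passed_events):
        self.passed_events = set(passed_events)

    def deliver(self, event):
        # True when the filter is set to pass this event to its application
        return event in self.passed_events

def dispatch(event, filters):
    """OS-side dispatch: deliver the event occurrence only to the
    applications whose filters are set to pass it."""
    return [app for app, f in filters.items() if f.deliver(event)]
```

Each application thus receives only the events it has opted into, so an event needed for low-power operation can pass while irrelevant events are blocked.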
Referring to
Comparing the embodiment of
According to an exemplary embodiment, at least one of the microphone 401, the low-power sound intensity measurement unit 402, the ADC 403, the sound wave pattern recognition unit 404, and the related content providing application 423 may be activated according to a schedule or may be activated when a predetermined event occurs. The schedule may be made to activate the element periodically or aperiodically. The schedule will be described in detail in the embodiments described with reference to
For example, the microphone 401, the low-power sound intensity measurement unit 402, and the ADC 403 may be activated to perform their own operations according to a schedule or when a predetermined event occurs. In this example, the microphone 401, the low-power sound intensity measurement unit 402, and the ADC 403 are elements which are activated from an inactive state according to a schedule or when a predetermined event occurs.
For example, the microphone 401, the low-power sound intensity measurement unit 402, the ADC 403, and the sound wave pattern recognition unit 404 may be activated to perform their own operations according to a schedule or when a predetermined event occurs.
For example, the related content providing application 423 may be activated only when a sound wave ID is successfully extracted. The related content providing application 423 which is activated may transmit the extracted sound wave ID to the server 300, and receive a related content corresponding to the sound wave ID from the server 300 and display the related content through the display unit 413 as a voice and/or an image. Alternatively, the related content providing application 423 which is activated may search and select a related content corresponding to the sound wave ID, and display the related content through the display unit 413 as a voice and/or an image. In this case, a related content database may be loaded into the memory 409, and the related content providing application 423 may scan the related content database and select the related content corresponding to the sound wave ID.
For example, the microphone 401 may be activated from an inactive state according to a schedule or when an event occurs, and, when the microphone 401 detects a sound (for example, a sound belonging to a predetermined band), the other elements (for example, the ADC 403, the related content providing application 423, the low-power sound intensity measurement unit 402, or the sound wave pattern recognition unit 404) may be activated simultaneously or in sequence.
For example, when a sound is detected by the microphone 401, the low-power sound intensity measurement unit 402, which determines whether the detected sound has an intensity greater than or equal to a predetermined threshold, may be activated. In addition, when it is determined that the detected sound has the intensity greater than or equal to the predetermined threshold, the sound wave pattern recognition unit 404, which extracts a sound wave ID, may be activated.
For example, at least one of the microphone 401, the low-power sound intensity measurement unit 402, the ADC 403, the sound wave pattern recognition unit 404, and the related content providing application 423 may be activated periodically or aperiodically from an inactive state according to a schedule which is made with reference to a main content schedule. For example, at least one of the microphone 401, the low-power sound intensity measurement unit 402, the ADC 403, the sound wave pattern recognition unit 404, and the related content providing application 423 may be activated in a predetermined period during the time that the main content is played. Alternatively, at least one of the microphone 401, the low-power sound intensity measurement unit 402, the ADC 403, the sound wave pattern recognition unit 404, and the related content providing application 423 may be operated in a predetermined period during the time that the main content is played, and may be operated in a period which is longer than the original period during the time that the main content is not played.
For example, at least one of the microphone 401, the low-power sound intensity measurement unit 402, the ADC 403, the sound wave pattern recognition unit 404, and the related content providing application 423 may be activated periodically or aperiodically according to a schedule which is made with reference to a sound wave ID schedule. For example, the at least one element may be activated at the time when the sound wave ID can be extracted.
For example, at least one of the microphone 401, the low-power sound intensity measurement unit 402, the ADC 403, the sound wave pattern recognition unit 404, and the related content providing application 423 may be activated in a predetermined period, and, when the extraction of the sound wave ID fails but it is determined that there is the sound wave ID, the at least one element may be activated in a period which is shorter than the original period. Thereafter, when the ID is extracted, the at least one element may be operated in the original period.
For example, at least one of the microphone 401, the low-power sound intensity measurement unit 402, the ADC 403, the sound wave pattern recognition unit 404, and the related content providing application 423 may be operated in a predetermined period, and, when the following equation is satisfied, may be operated in a period which is shorter than the predetermined period:
Threshold > Intensity of detected sound > (α × Threshold), where α is smaller than 1.
That is, when the intensity of the detected sound is smaller than the threshold but close to it, it is determined that a sound carrying the sound wave ID is likely to be present, and, when the above equation is satisfied, the above-described elements may be activated in a period which is shorter than the original period.
For example, at least one of the microphone 401, the low-power sound intensity measurement unit 402, the ADC 403, the sound wave pattern recognition unit 404, and the related content providing application 423 may be activated when a predetermined event occurs. Herein, the event may refer to a case in which a specific application is executed, a case in which the display unit of the mobile receiver 200 is activated or inactivated, or a case in which the mobile receiver 200 receives a push message from the outside.
Herein, the specific application may be an application which uses a microphone for its own use. However, this is merely an example and the specific application may correspond to other applications.
For example, at least one of the microphone 401, the low-power sound intensity measurement unit 402, the ADC 403, the sound wave pattern recognition unit 404, and the related content providing application 423 may be operated in a predetermined period, and may not be operated when the predetermined event occurs. In addition, when the predetermined event is finished, the at least one element may be activated according to the original period.
For example, when the predetermined event occurs, the processor 407 may deliver occurrence of the event to an operating system (OS) (not shown) of the mobile receiver 400 (which may be loaded into the memory 409), and the OS may deliver the occurrence of the event to filters (not shown) (which may be loaded into the memory 409) provided to correspond to respective applications operating in the mobile receiver 400. The filters may deliver or not deliver the occurrence of the event to the applications corresponding thereto according to predetermined settings. These operations will be described in detail in the embodiments described with reference to
According to an exemplary embodiment, the low-power sound intensity measurement unit 402 may consume low power, but may be continuously activated, and the sound wave pattern recognition unit 404 may consume more power than the low-power sound intensity measurement unit 402, but may be frequently inactivated and may be activated according to a schedule or when a predetermined event occurs (that is, intermittently activated). The low-power sound intensity measurement unit 402 may detect a sound within a predetermined frequency range, and continuously determine whether the intensity of the detected sound is greater than or equal to a predetermined threshold.
In an embodiment, the low-power sound intensity measurement unit 402 may determine whether the sound inputted through the microphone 401 has an intensity greater than or equal to a predetermined threshold within a predetermined frequency range. When the intensity is greater than or equal to the threshold, the low-power sound intensity measurement unit 402 may activate the ADC 403 and the sound wave pattern recognition unit 404 which have been inactivated for a low-power operation. In this case, the activated ADC 403 may convert the sound into digital data and transmit the digital data to the activated sound wave pattern recognition unit 404.
According to an exemplary embodiment, the sound wave pattern recognition unit 404 may receive the sound in the digital data form from the ADC 403, and may extract a sound wave ID from the received sound when the sound wave ID is included in the sound. In this case, when the processor 407 is inactivated to operate at low power, the sound wave pattern recognition unit 404 may activate the processor 407. In an exemplary embodiment, the sound wave pattern recognition unit 404 may provide the extracted sound wave ID to the related content providing application 423.
According to another exemplary embodiment, the sound wave pattern recognition unit 404 may receive the sound in the digital data form from the ADC 403, determine whether it is possible to extract a sound wave ID from the received sound by comparing it with at least one pre-stored sound wave ID, and, when it is determined that it is possible to extract the sound wave ID, directly extract the sound wave ID. According to an alternative embodiment, the sound wave pattern recognition unit 404 may determine whether it is possible to extract the sound wave ID from the sound received from the ADC 403, and, when it is determined that it is possible, provide the sound received from the ADC 403 to the related content providing application 423, and the related content providing application 423 may extract the sound wave ID.
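The extractability check described above — comparing the received sound against pre-stored sound wave ID patterns — can be sketched as a similarity test (purely illustrative; the actual comparison method is not specified here, and a normalized dot product stands in for it):

```python
def can_extract(received, stored_ids, min_score=0.9):
    """Decide whether a sound wave ID is extractable by comparing the
    received samples against each pre-stored ID pattern. The similarity
    measure (normalized dot product) and threshold are assumptions."""
    def similarity(a, b):
        n = min(len(a), len(b))
        num = sum(x * y for x, y in zip(a[:n], b[:n]))
        den = (sum(x * x for x in a[:n]) * sum(y * y for y in b[:n])) ** 0.5
        return num / den if den else 0.0

    best = max((similarity(received, s) for s in stored_ids), default=0.0)
    return best >= min_score
```

When the check succeeds, the unit would either extract the ID itself or hand the digital data to the related content providing application, as in the two embodiments above.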
A technique of implementing the operation of detecting and extracting the sound wave ID by using separate hardware may use technologies disclosed in the patent documents presented below.
For example, a technique of extracting a sound wave which is modulated to a non-audible sound at low power may use technology disclosed in Korean Patent Application No. 10-2013-0141504, filed on Nov. 20, 2013, by the same inventor as the present application with the KIPO (titled “Method for Receiving Low-Power Sound Wave and Mobile Device Using the Same”).
The technologies disclosed in the specification of Korean Patent Application No. 10-2013-0141504 are incorporated into the specification of the present application and are dealt with as a part of the specification of the present application.
A DMB application or a video player which is operated under the control of the processor 407, or a codec 221 necessary for playing a content through the DMB application or the video player may be loaded into the memory 409. The DMB application, the video player, or the codec 221 may be stored in the storage device 411, and may be loaded into the memory 409 and operated according to a user's command. In addition, although not shown in the drawings, when a message is received from the server 300, a message processing program (not shown) may be loaded into the memory 409 and display the message through the display unit 413.
Elements which are not described in the embodiment of
Extraction of Low-Power Sound Wave ID
The term “sound recognition module” used in the specification of the present application refers to at least one of the microphone, the ADC, the sound wave ID extraction unit, the related content providing application, the low-power sound intensity measurement unit, and the sound wave pattern recognition unit which are included in the mobile receiver for the purpose of describing the present invention, and the at least one element may detect or extract a sound wave ID, or may activate an element for detecting or extracting the sound wave ID.
According to an exemplary embodiment of the present invention, the sound recognition module may be activated according to a schedule or when a predetermined event occurs (or intermittently), so that the sound recognition module can be operated at low power. When the sound recognition module is activated, the sound recognition module may detect a sound and extract a sound wave ID. In this case, the activation may refer to performing overall operations, such as executing a necessary hardware module, executing necessary software, or requesting necessary input information from an operating system, in order to switch from a state in which the sound recognition module performs a minimum function to minimize power consumption to a state in which the sound recognition module can detect and extract the sound wave ID.
The intermittent activation to operate at low power is illustrated in the embodiments of
The intermittent activation to operate at low power may indicate (A) a state in which the sound recognition module is activated and operated for a period of 10 minutes, which is illustrated in
(A) Periodic Activation
View (a) of
(B) Activation According to a Broadcasting Time of a Content and a Time at which a Sound Wave ID is Extractable.
According to an exemplary embodiment, when the sound recognition module is periodically activated, the activation period may be determined in consideration of a time location of a sound wave ID which is extracted from a main content outputted by the content playing device 100, and continuity of the extractable sound wave ID. For example, it may be assumed that a content having a pattern of a total of 4 minutes, in which the sound wave ID in the content can be continuously and repeatedly extracted for the first 2 minutes but cannot be extracted for the second 2 minutes, is outputted continuously. In this example, when the sound recognition module checks whether it is possible to extract the sound wave ID at intervals of at most 2 minutes, the sound recognition module can extract the sound wave ID from the content within a short time and also can intermittently try to extract the sound wave ID, so that power consumption which is caused by the extraction of the sound wave ID can be reduced.
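The 2-of-4-minute example above can be checked numerically: waking at most every 2 minutes guarantees that at least one attempt lands inside the extractable window regardless of when the cycle started. A small sketch (the function name and time granularity are assumptions):

```python
def wakes_hit_window(period_min, extractable_min, cycle_min, horizon_min=60):
    """Check that periodic wake-ups (every `period_min` minutes) fall
    inside the extractable window at least once within the horizon, for
    every possible phase offset. The window is the first
    `extractable_min` minutes of each repeating `cycle_min` cycle."""
    for start in range(cycle_min):  # try every phase offset
        hits = [t for t in range(start, horizon_min, period_min)
                if t % cycle_min < extractable_min]
        if not hits:
            return False
    return True
```

A 2-minute wake period always hits the 2-minute window of a 4-minute cycle, while a 4-minute period can miss it entirely for some offsets.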
According to another exemplary embodiment, when the broadcasting date and time of a content outputted from the content playing device 100 can be known in advance, the sound recognition module may be activated in a shorter period during the time that the content is broadcast. For example, it is assumed that the period of the sound recognition module is set to 10 minutes. In this case, when a specific content of a TV is scheduled to be aired at 6 p.m. every Saturday for one hour, and it is possible to extract a sound wave ID from the content, the sound recognition module may be operated in a period of 5 minutes rather than 10 minutes from 6 p.m. until 7 p.m. every Saturday. Alternatively, the sound recognition module which has been continuously inactivated may be activated for only the one hour from 6 p.m. until 7 p.m. every Saturday.
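The Saturday broadcast example can be sketched as a period-selection function over a known broadcast schedule (the window representation, function name, and period values are assumptions):

```python
from datetime import datetime

DEFAULT_PERIOD_MIN = 10    # normal wake period, in minutes
BROADCAST_PERIOD_MIN = 5   # shorter period while the content airs

# Hypothetical schedule entry: (weekday, start_hour, end_hour).
# Python's weekday() numbers Saturday as 5.
BROADCAST_WINDOWS = [(5, 18, 19)]

def recognition_period(now):
    """Return the activation period in minutes: shorter during a known
    broadcast window, the default period otherwise."""
    for weekday, start, end in BROADCAST_WINDOWS:
        if now.weekday() == weekday and start <= now.hour < end:
            return BROADCAST_PERIOD_MIN
    return DEFAULT_PERIOD_MIN
```

The same table could instead drive the alternative behavior described above, in which the module is left fully inactive outside the broadcast window.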
The two exemplary embodiments described above will be described with reference to
(C) Activation According to a Result of Extracting a Previous Sound Wave ID
Referring to
(D) Activation According to an Internal Event of a Mobile Receiver
According to an exemplary embodiment, the sound recognition module may be activated and operated aperiodically according to an event which occurs in the mobile receiver 200 or 400. This will be described below with reference to
Various applications may be operated on the mobile receiver 200 or 400 simultaneously. In this case, the microphone 201 or 401, which is a part of the mobile receiver 200 or 400, is a resource shared by various applications, and may be shared in a First-Come First-Served (FCFS) manner, that is, in such a manner that another application cannot use the microphone 201 until the application which is using the microphone 201 first is stopped. In this case, the sound recognition module which is intermittently activated may prevent another application from using the microphone 201 or 401 even when that application tries to use it. Therefore, when an application which is able to use the microphone 201 or 401 is detected, or when the sound recognition module fails in an attempt to use the microphone 201 or 401, the sound recognition module may defer being activated until the application is finished. Thus, the sound recognition module does not interfere with the operation of another application using the microphone 201 or 401 and also reduces the total number of activations, so that the mobile receiver can be operated at low power.
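The FCFS sharing and deferred activation described above can be sketched as follows (a minimal illustration; the arbiter class and its interface are assumptions, not part of this document):

```python
class MicrophoneArbiter:
    """First-Come First-Served sharing of the microphone: one owner at
    a time, and later requests fail until the owner releases it."""
    def __init__(self):
        self.owner = None

    def acquire(self, app):
        if self.owner is None:
            self.owner = app
            return True
        return False  # already taken: the caller must defer

    def release(self, app):
        if self.owner == app:
            self.owner = None

def try_activate_recognition(arbiter):
    """The sound recognition module defers activation when the
    microphone is already owned by another application."""
    return arbiter.acquire("sound_recognition_module")
```

A failed acquire corresponds to deferring the scheduled activation until the owning application finishes, which both avoids interference and reduces the number of activations.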
For example, application 1 included in the sound recognition module may be loaded into the memory 209 or 409 of the mobile receiver 200 or 400 at time t1 as shown in
In another example, when a camera application is activated, the application may exclusively use a part of the sound recognition module to make a video. In this case, the sound recognition module may stop periodic activation so that the camera application can exclusively use a part of the sound recognition module, and may be activated when the camera application is finished.
It is assumed that the sound recognition module of the mobile receiver 200 or 400 is activated at time t4 and tries to extract a sound wave ID. An originally scheduled period is period 7 and the sound recognition module may be scheduled to be activated again at time t6. However, the operating system may notify the sound recognition module that application 4 using the microphone 201 or 401 is executed at time t5 (the method for detecting or receiving a notification will be described in detail with reference to
For example, when a telephone application is activated, this application normally uses the microphone 201 or 401 and the ADC 203 or 403 and activates the processor 207 or 407. Therefore, when the sound recognition module is activated simultaneously, it is possible to extract a sound wave ID at low power. In another example, when a video player, a DMB application, or a codec 221 or 421 is operated, the mobile receiver 200 or 400 may be deemed to serve as a content playing device and thus the sound recognition module may be activated to provide a related content.
In
In
The case of
The mobile receiver 200 or 400 may include a function of receiving a phone call. In this case, when a phone call comes in, the operating system may operate an application to receive the phone call, and deliver a phone call reception event to filters of the respective applications so that the other applications can respond to the calling event. For example, application 5 may be an application for processing phone call reception and application 6 may be an application which is playing back music. When the phone call comes in, the operating system delivers the calling event to the filters of the applications. Since application 6 should stop playing back the music when the phone call comes in, the filter of application 6 may be set to pass the call reception event. Since application 5 should prepare a user interface and make it possible to receive the phone call when the phone call comes in, the filter of application 5 may be set to pass the call reception event. It is assumed that the operating system can deliver digital data which is inputted through the microphone 201 or 401 and the ADC 203 or 403 to various applications simultaneously. In this case, the sound recognition module may receive the sound digital data and extract the sound wave ID. Therefore, an application corresponding to the application of the sound recognition module may set its own filter to pass the phone call reception event. In brief, the events necessary for operating at low power from among the events transmitted by the operating system are set to pass through the filter, so that the sound recognition module can extract the sound wave ID at low power.
The case of
When application 2 of
The case of
The broadcasting or playing schedule of main content 1 and main content 2 of
The case of
In
Method for Providing a Related Content Based on a Related Content DB of the Server
On the assumption that the method for providing the related content according to an exemplary embodiment is applied to the system of
The method for providing the related content according to an exemplary embodiment may include the steps of: outputting, by the content playing device 100, a main content (S101); activating the sound recognition module of the mobile receiver (S103); extracting a sound wave ID (S105); receiving, by the server 300, the sound wave ID extracted in step S105, and searching and selecting a related content corresponding to the sound wave ID (S107); and displaying, by the mobile receiver 200, the related content selected in step S107 (S109).
In step S103, the sound recognition module may be activated according to a schedule or may be activated when a predetermined event occurs. The schedule may be made to activate the sound recognition module periodically and/or aperiodically. For example, the schedule may be made with reference to a main content schedule (a schedule indicating times at which a main content is played) or a sound wave ID schedule (a schedule indicating times at which the sound wave ID can be extracted), but this is merely an example. The schedule may not necessarily be made with reference to the main content schedule or the sound wave ID schedule.
According to an exemplary embodiment, the schedule may be made to activate the sound recognition module periodically or may be made to activate the sound recognition module aperiodically. Examples of the sound recognition module activated according to a schedule have been illustrated in the embodiments described with reference to
Examples of the sound recognition module activated according to an event have been illustrated in the embodiments described with reference to
When a related content providing application is used as a part of the sound recognition module, the related content providing application which is operated in the memory 209 of the mobile receiver 200 transmits the extracted sound wave ID to the server 300. The server 300 may search for and select the related content corresponding to the received sound wave ID (S107). Thereafter, the server 300 transmits the selected related content to the mobile receiver 200, and the mobile receiver 200 displays the related content for the user (S109).
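The server-based lookup flow (steps S105 through S109) can be sketched end to end as follows (the DB contents, function names, and the in-process stand-in for the network call are all assumptions):

```python
# Server-side mapping from sound wave ID to related content (hypothetical).
RELATED_CONTENT_DB = {
    "id-001": "product page for the advertised item",
}

def server_lookup(sound_wave_id, db=RELATED_CONTENT_DB):
    """S107: the server searches for and selects the related content."""
    return db.get(sound_wave_id)

def handle_extracted_id(sound_wave_id):
    """Mobile receiver side: transmit the extracted ID (here a direct
    function call stands in for the network round trip), then display
    whatever the server returns (S109)."""
    related = server_lookup(sound_wave_id)
    return f"display: {related}" if related else "no related content"
```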
Method for Providing a Related Content Based on a Related Content DB of the Mobile Receiver
On the assumption that the method for providing the related content according to an exemplary embodiment is applied to the system of
The method for providing the related content according to an exemplary embodiment may include the steps of: outputting, by the content playing device 100, a main content (S201); activating the sound recognition module of the mobile receiver (S203); extracting a sound wave ID (S207); searching and selecting, by the mobile receiver, a related content corresponding to the sound wave ID extracted in step S207 (S209); and displaying, by the mobile receiver 200, the related content selected in step S209 (S211).
The method for providing the related content according to an exemplary embodiment may further include a step of transmitting, by the server, data for updating regarding a related content DB to the mobile receiver (S205), and the mobile receiver may receive the data for updating from the server and update the related content DB stored therein.
For example, the related content providing application which operates in the memory 209 of the mobile receiver may store the related content DB received from the server 300 in the storage device 211 or update the pre-stored DB. For example, the related content providing application operating in the memory 209 may search and select the related content DB corresponding to the sound wave ID (S209), and display the selected related content through the display unit 213 (S211).
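The client-side DB update and lookup (steps S205 and S209) can be sketched as follows (the class and field names are hypothetical; the actual storage format is not specified in this document):

```python
class LocalRelatedContentDB:
    """Client-side related content DB kept on the mobile receiver and
    updated with data received from the server (step S205 sketch)."""
    def __init__(self):
        self.entries = {}  # sound wave ID -> related content

    def apply_update(self, update):
        """Merge update data received from the server into the local DB."""
        self.entries.update(update)

    def select(self, sound_wave_id):
        """S209: search for and select the related content for an ID,
        or None when no matching entry is stored locally."""
        return self.entries.get(sound_wave_id)
```

Keeping the DB local lets the receiver display the related content without a per-ID round trip to the server; only the update step requires connectivity.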
The step of activating the sound recognition module of the mobile receiver (S203) may be activating according to a schedule or activating when a predetermined event occurs in the same way as the step of activating described above with reference to
Unlike in the embodiment of
Time to transmit the related content DB by the server 300 is not limited to the time displayed on
Method for the Mobile Receiver to Serve as a Content Playing Device
It is assumed that, in the system of
The method for providing the related content according to an exemplary embodiment may include the steps of: outputting, by the mobile receiver 200, a main content (S301); activating the sound recognition module of the mobile receiver (S305); extracting a sound wave ID (S307); searching and selecting, by the server, a related content corresponding to the sound wave ID extracted in step S307 (S309); and displaying, by the mobile receiver 200, the related content selected in step S309 (S311).
The step of activating the sound recognition module of the mobile receiver (S305) may be activating according to a schedule or activating when a predetermined event occurs in the same way as the step of activating described above with reference to
Step S307 may be implemented in the following methods, for example.
The first method is to cause the microphone 201 to detect a sound outputted through a speaker (not shown) which is a part of the display unit 213 of the mobile receiver 200, and cause the sound recognition module to extract an ID from the detected sound.
The second method is to cause the sound recognition module to receive digital data from the operating system before the digital data is outputted through the speaker (not shown) which is a part of the display unit 213 of the mobile receiver 200, and extract the sound wave ID.
The third method is to cause an input terminal of the microphone 201 of a headset (not shown) connected with the mobile receiver 200 to be connected with a small sound output terminal of the headset (not shown), and cause the sound recognition module to extract an ID from an electric signal transmitted to a small speaker of the headset (not shown).
The sound recognition module transmits the sound wave ID extracted in step S307 to the server 300. The server 300 may search for and select a related content corresponding to the sound wave ID with reference to the related content DB (which is stored in the server 300 or stored in a separate storage device (not shown)) (S309). The server 300 transmits the result of the selecting to the mobile receiver 200, and the mobile receiver 200 displays the result of the selecting.
It is assumed that, in the system of
The method for providing the related content according to an exemplary embodiment may include the steps of: outputting, by the content playing device 100, a main content (S401); activating the sound recognition module of the mobile receiver (S403); extracting a sound wave ID (S407); searching and selecting, by the mobile receiver, a related content corresponding to the sound wave ID extracted in step S407 (S409); and displaying, by the mobile receiver 200, the related content selected in step S409 (S411).
The method for providing the related content according to an exemplary embodiment may further include the step of transmitting, by the server, data for updating regarding a related content DB to the mobile receiver (S405), and the mobile receiver may receive the data for updating from the server and update the related content DB stored therein.
Step S407 described above may be implemented in the same way as step S307 described above.
The step of activating the sound recognition module of the mobile receiver (S403) may be activating according to a schedule or activating when a predetermined event occurs in the same way as the step of activating described with reference to
Unlike in the embodiment of
Time at which the server 300 transmits the related content DB is not limited to the time displayed on
Location of a Sound Wave ID Extractable from a Content
View (a) of
For example, on the assumption that the main contents as shown in view (a) of
The content playing device 100 may output main contents (main contents 1, 2, and 3) over time, and a sound wave ID may be extracted from each of these contents: sound wave ID 1 from the sound outputted while main content 1 is played, sound wave ID 2 from the sound outputted while main content 2 is played, and sound wave ID 3 from the sound outputted while main content 3 is played.
The mobile receiver 200 may extract the respective sound wave IDs in sequence and transmit them to the server 300, receive the related contents corresponding to the sound wave IDs from the server 300, and display the related contents.
The mobile receiver 200 may extract the sound wave ID according to a schedule or when a predetermined event occurs (that is, intermittently). This is the same as in the above-described exemplary embodiments, and thus a detailed description thereof is omitted.
View (b) of
View (b) of
For example, on the assumption that the main contents as shown in view (b) of
The content playing device 100 may output main contents (main contents 4, 5, and 6) over time, and the mobile receiver 200 may extract the sound wave IDs from the corresponding sounds: sound wave ID 1 from the sound outputted while main content 4 is played, sound wave IDs 2 and 3 from the sound outputted while main content 5 is played, and sound wave ID 4 from the sound outputted while main content 6 is played.
The mobile receiver 200 may extract the sound wave IDs in sequence and transmit them to the server 300, receive the related contents corresponding to the sound wave IDs from the server 300, and display the related contents.
Method for Outputting a Related Content without Interfering with Immersion
According to an exemplary embodiment of the present invention, the related content may be displayed through the mobile receiver at a time when the viewer or listener does not need to stay immersed in the main content, for example, while an advertisement or an end title is displayed. For example, when the main content is a drama and the related content is information on an indirectly advertised product appearing in the drama, the mobile receiver may defer displaying the related content until the drama ends and the end title is displayed, even when the sound wave ID from which the related content is obtained is located in the middle of the drama.
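The deferral rule described above can be sketched in a few lines. This is a hedged illustration with hypothetical names and a simple time model (seconds into the broadcast); the patent does not prescribe how the end-title time is known.

```python
def display_time(id_detected_at, main_content_end_at):
    # Defer display until the main content ends (e.g., when the end
    # title rolls), even if the sound wave ID was detected mid-programme.
    # If the ID is detected after the main content has already ended,
    # display immediately at detection time.
    return max(id_detected_at, main_content_end_at)
```

For instance, an ID detected 30 minutes into a 60-minute drama yields a display time of 60, so the related content does not interrupt the viewer's immersion.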
A computer readable recording medium which has a program recorded therein for executing the steps of the method described above with reference to
Method for Providing a Related Content
On the assumption that the method for providing the related content according to an exemplary embodiment of the present invention is applied to the system of
Referring to
In addition, the method for providing the related content according to an exemplary embodiment of the present invention may further include a step of determining whether the sound detected in step S501 has an intensity greater than or equal to a predetermined threshold or not, and the step of determining may be performed between steps S501 and S505.
For example, the step of determining may be performed between steps S501 and S503. In this case, the step of converting the sound detected in step S501 into digital data (S503) may be performed when it is determined that the sound detected in step S501 has the intensity greater than or equal to the predetermined threshold.
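The intensity gate between steps S501 and S503 can be illustrated as below. The function name and the numeric threshold are assumptions for the sketch; the patent leaves the threshold value unspecified.

```python
def should_convert(detected_intensity, threshold=0.2):
    # Determining step between S501 and S503: proceed to ADC conversion
    # (S503) only when the detected sound's intensity reaches the
    # predetermined threshold. Intensity is a normalized level here.
    return detected_intensity >= threshold
```

Gating the conversion this way means the ADC and the downstream extraction element do no work on sounds too quiet to carry a usable sound wave ID.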
In addition, in the method for providing the related content according to an exemplary embodiment, the step of selecting the related content (S507) may be performed by the mobile receiver 200 or the server 300. When the step of searching and selecting the related content (S507) is implemented to be performed by the server 300, the method for providing the related content according to an exemplary embodiment of the present invention may further include the steps of: transmitting, by the mobile receiver 200, the sound wave ID extracted in step S505 to the server 300; and transmitting the related content selected by the server 300 to the mobile receiver 200. Herein, the step of transmitting the sound wave ID to the server 300 may be performed after step S505, and the step of transmitting the related content selected by the server 300 to the mobile receiver 200 may be performed between steps S507 and S509.
In addition, the method for providing the related content according to an exemplary embodiment may further include the steps of: determining whether the sound detected in step S501 has an intensity greater than or equal to a predetermined threshold or not (referred to as step S502 for the purpose of explaining); and activating at least one element of an element for performing step S501, an element for performing step S502, an element for performing step S503, an element for performing step S505, and an element for performing step S507 (referred to as step S504 for the purpose of explaining).
For example, the step of activating (S504) may be the step of activating the element for performing step S501, the element for performing step S502, and the element for performing step S503. In this example, the element for performing step S501, the element for performing step S502, and the element for performing step S503 may be elements which are activated after having been in an inactive state, while the element for performing step S505 and the element for performing step S507 are already activated.
For example, the step of activating (S504) may include the steps of: activating the element for performing step S501 and the element for performing step S502; and, when the result of performing step S502 succeeds, activating the element for performing step S503 and the element for performing step S507.
For example, the step of activating (S504) may include the steps of: activating the element for performing step S501; and, when a sound is detected as a result of performing step S501 (for example, when a sound belonging to a predetermined band is detected), activating the element for performing step S502 and the element for performing step S503. Herein, the element for performing step S507 and the element for performing step S509 may be already activated, may be activated when step S503 succeeds, or may be activated when step S503 is activated.
For example, the step of activating (S504) may include the steps of: when a sound is detected as a result of performing step S501, activating the element for performing step S502; and, when it is determined that the sound detected in step S501 has an intensity greater than or equal to a predetermined threshold as a result of performing step S502, activating the element for performing step S503 and the element for performing step S505. Herein, when a sound wave ID is extracted as a result of performing step S505, the element for performing step S507 and the element for performing step S509 may be activated simultaneously or in sequence.
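The staged-activation examples above can be condensed into a sketch that reports which elements would wake at each stage. Function name, step labels as strings, and the threshold value are illustrative assumptions.

```python
def staged_activation(sound_detected, intensity, threshold=0.2):
    # Staged S504 sketch: the S502 element wakes when a sound is
    # detected; the S503 and S505 elements wake only after the S502
    # intensity check passes. Returns the set of newly activated
    # elements (later stages such as S507/S509 would follow the same
    # pattern once an ID is extracted).
    active = set()
    if sound_detected:
        active.add("S502")
        if intensity >= threshold:
            active.update({"S503", "S505"})
    return active
```

Activating elements stage by stage keeps power-hungry parts of the pipeline (the ADC and the ID extractor) asleep until earlier, cheaper checks succeed.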
For example, the step of activating (S504) may be performed according to a schedule or when a predetermined event occurs. The schedule has been described in detail in the embodiments described with reference to
For example, the step of activating (S504) may be performed periodically and/or aperiodically according to a schedule which is made with reference to a main content schedule. For example, the step of activating (S504) may be performed in a predetermined period during the time that a main content is played. Alternatively, the step of activating (S504) may be performed in a first predetermined period during the time that the main content is played, and may be performed in a second period which is longer than the first period during the time that the main content is not played.
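The schedule-dependent period selection just described can be sketched as follows. The period values and the (start, end) schedule representation are assumptions; the patent only requires a shorter first period while a main content is played and a longer second period otherwise.

```python
def activation_period(now, main_content_schedule,
                      first_period=5.0, second_period=60.0):
    # Pick the wake-up interval for S504 from the main content schedule:
    # use the short first period while some main content is playing,
    # and the longer second period at other times. "now" and the
    # schedule entries are seconds on a common clock; ends exclusive.
    for start, end in main_content_schedule:
        if start <= now < end:
            return first_period
    return second_period
```

With a schedule of one programme from t=0 to t=100, a call at t=50 returns the 5-second period and a call at t=150 returns the 60-second period, so the receiver listens aggressively only when an ID could plausibly be present.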
For example, the step of activating (S504) may be performed periodically and/or aperiodically according to a schedule which is made with reference to a sound wave ID schedule. For example, the step of activating (S504) may be performed at the time when the sound wave ID can be extracted.
For example, the step of activating (S504) may be performed in a third predetermined period, and, when the extraction of the sound wave ID fails, but it is determined that there exists the sound wave ID, the step of activating (S504) may be performed in a fourth period which is shorter than the third period. Thereafter, when the ID is extracted, the step of activating (S504) may be performed in the third period.
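The third-period/fourth-period adaptation can be stated as a small decision rule. Names and the concrete period lengths are hypothetical; only the ordering (fourth period shorter than third) comes from the description.

```python
def next_period(extraction_succeeded, id_believed_present,
                third_period=30.0, fourth_period=5.0):
    # Adaptation sketch: normally wake up every third_period; when
    # extraction failed but a sound wave ID is believed to exist,
    # retry more often using the shorter fourth_period. Once an ID
    # is extracted, fall back to the third period.
    if not extraction_succeeded and id_believed_present:
        return fourth_period
    return third_period
```

This retries quickly only while there is evidence of an undecoded ID, rather than keeping the microphone pipeline busy at all times.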
For example, the step of activating (S504) may be performed in a fifth predetermined period and, when the following equation is satisfied as a result of performing step S502, may be performed in a sixth period which is shorter than the fifth period:
Threshold > intensity of detected sound > (α × threshold), where α is smaller than 1.
That is, according to an exemplary embodiment, when the intensity of the detected sound is smaller than the threshold but close to it, it may be determined that a sound carrying a sound wave ID is very likely present, and, when the above-stated equation is satisfied, the step of activating may be performed in the sixth period, which is shorter than the fifth period.
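The near-threshold condition from the equation above translates directly into a predicate. The threshold and α values are illustrative assumptions; the description only requires α < 1.

```python
def near_threshold(intensity, threshold=1.0, alpha=0.8):
    # Implements: threshold > intensity > alpha * threshold, alpha < 1.
    # True means the sound is just below the detection threshold, so a
    # sound carrying a sound wave ID is likely present and S504 should
    # run in the shorter sixth period rather than the fifth.
    return alpha * threshold < intensity < threshold
```

With the defaults, an intensity of 0.9 falls in the (0.8, 1.0) band and triggers the shorter period, while 0.5 (too quiet) and 1.2 (already above threshold, so handled by the normal path) do not.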
For example, the step of activating (S504) may be performed when a predetermined event occurs. Herein, the event may be an event in which a specific application is executed, an event in which the display unit of the mobile receiver 200 is activated or inactivated, or an event in which the mobile receiver 200 receives a push message from the outside.
Herein, the specific application may be an application which uses a microphone for its own use, but this is merely an example and the specific application may correspond to other applications.
For example, the step of activating (S504) may be performed in a seventh predetermined period, and, when a predetermined event occurs, may not be performed. In addition, when the predetermined event is finished, the step of activating (S504) may be performed again according to a predetermined period (for example, the seventh period).
For example, the step of activating (S504) may include the steps of: when a predetermined event occurs, delivering the occurrence of the event to an operating system (OS) (not shown) of the mobile receiver 200; delivering, by the OS, the occurrence of the event to filters provided in applications operating in the mobile receiver 200; and delivering or refraining from delivering, by the filters, the occurrence of the event to the applications corresponding thereto according to predetermined settings. This operation has been described in detail in the embodiments described with reference to
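The OS-to-filters delivery path can be sketched as a dispatch loop. Representing each application's filter as a predicate keyed by application name is an assumption for the sketch; a real mobile OS exposes this through its own broadcast or notification mechanism.

```python
def dispatch_event(event, filters):
    # The OS delivers the event occurrence to the filter of each
    # application; each filter decides, per its predetermined settings,
    # whether to pass the event on to its application. Returns the
    # names of the applications that actually receive the event.
    delivered = []
    for app_name, allow in filters.items():
        if allow(event):
            delivered.append(app_name)
    return delivered
```

For example, a filter set where one application accepts every event and another accepts only push messages delivers a "push" event to both applications but a "display_on" event to only the first.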
In the method for providing the related content described above with reference to
In the method for providing the related content described above with reference to
When the mobile receiver is configured to receive the sound outputted from the mobile receiver, the method for providing the related content may include the steps of: detecting, by the mobile receiver, a sound outputted from the mobile receiver; converting, by the mobile receiver, the detected sound into digital data; extracting, by the mobile receiver, a sound wave ID from the digital data; selecting a related content corresponding to the extracted sound wave ID; and displaying, by the mobile receiver, the selected related content as an image and/or a voice. In addition, the method for providing the related content may further include the steps of: determining whether the detected sound has an intensity greater than or equal to a predetermined threshold; and activating at least one element of: an element for performing the step of converting, by the mobile receiver, the detected sound into the digital data; an element for performing the step of extracting, by the mobile receiver, the sound wave ID from the digital data; an element for performing the step of selecting the related content corresponding to the extracted sound wave ID; and an element for performing the step of displaying, by the mobile receiver, the selected related content as the image and/or the voice. The step of activating has been described in detail in the above-described embodiments.
Computer-Readable Medium which has a Program Recorded Therein for Executing a Method for Providing a Related Content
The method for providing the related content according to the above-described exemplary embodiments of the present invention may be implemented by using a program executed in a mobile receiver which is provided with a memory, a processor, a microphone, and an ADC (referred to as a "program for executing the method for providing the related content" for the purpose of description). That is, the mobile receiver provided with the memory, the processor, the microphone, and the ADC has a function as a computer, and the method for providing the related content may be implemented in the form of a program which is executed in the mobile receiver.
The program for executing the method for providing the related content according to an exemplary embodiment of the present invention may include a related content providing application.
According to an exemplary embodiment, the program for executing the method for providing the related content may be loaded into the memory and executed under the control of the processor.
The program for executing the method for providing the related content may execute one of the methods described in the embodiments described above with reference to
For example, the program for executing the method for providing the related content may execute the steps which can be implemented by using a program from among the steps described with reference to
For example, the program for executing the method for providing the related content may execute the steps S502, S503, S504, S505, S507, and/or S509 in the mobile receiver. That is, all of the steps S502, S503, S504, S505, S507, and S509 may be implemented by using programs, or some of the steps S502, S503, S504, S505, S507, and S509 may be implemented by using hardware and the other steps may be implemented by using a program. The steps S502, S503, S504, S505, S507, and S509 have been described in detail in the embodiments described with reference to
While the invention has been described with reference to certain preferred embodiments thereof and drawings, the present invention is not limited to the above-described embodiments and various changes or modification may be made based on the descriptions provided herein by those skilled in the art.
The scope of the present disclosure should not be limited to and defined by the above-described exemplary embodiments, and should be defined not only by the appended claims but also by the equivalents to the scopes of the claims.
Patent | Priority | Assignee | Title |
10765953, | Dec 27 2017 | Nintendo Co., Ltd. | Information processing system, information processing method, information processing apparatus, and non-transitory storage medium having stored therein information processing program |
Executed on | Assignor | Assignee | Conveyance | Frame | Reel | Doc |
Jan 28 2015 | SOUNDLLY INC. | (assignment on the face of the patent) | / | |||
Sep 06 2016 | KIM, TAE HYUN | SOUNDLLY INC | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 039689 | /0563 | |
Dec 13 2019 | SOUNDLLY INC | ONNURIDMC INC | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 051305 | /0144 | |
Oct 20 2020 | ONNURI DMC INC | MOTIVINTELLIGENCE INC | CHANGE OF NAME SEE DOCUMENT FOR DETAILS | 055037 | /0480 |
Date | Maintenance Fee Events |
Apr 19 2021 | M2551: Payment of Maintenance Fee, 4th Yr, Small Entity. |