Among other things, a method includes receiving, from a user of a user interface of a media authoring application, first settings specifying a destination for media content authored in the media authoring application, the destination selected from a set of destinations supported by the media authoring application, and second settings specifying a set of media characteristics of media content authored in the media authoring application, and generating data representing a user interface element usable to provide media content authored in the media authoring application to the destination specified in the first settings at the set of media characteristics specified in the second settings.
1. A method comprising:
receiving, from a user of a user interface of a media authoring application, first settings specifying a destination for first media content authored in the media authoring application, the destination selected from a set of destinations supported by the media authoring application, and second settings specifying a set of media characteristics of the first media content authored in the media authoring application; and
generating data associated with the first settings and the second settings, the data usable by an instance of the media authoring application to display a user interface element in a user interface of the instance of the media authoring application, wherein the user interface element, when invoked by a user input, causes the instance of the media authoring application to, in response to the user input, both
1) generate second media content comprising at least one frame of video, the at least one frame of video formatted based on the set of media characteristics specified in the second settings, and
2) provide the formatted second media content to the destination specified in the first settings.
10. A non-transitory computer readable storage device encoded with instructions that, when executed by a computer system, cause the computer system to carry out operations comprising:
receiving, from a user of a user interface of a media authoring application, first settings specifying a destination for first media content authored in the media authoring application, the destination selected from a set of destinations supported by the media authoring application, and second settings specifying a set of media characteristics of the first media content authored in the media authoring application; and
generating data associated with the first settings and the second settings, the data usable by an instance of the media authoring application to display a user interface element in a user interface of the instance of the media authoring application, wherein the user interface element, when invoked by a user input, causes the instance of the media authoring application to, in response to the user input, both
1) generate second media content comprising at least one frame of video, the at least one frame of video formatted based on the set of media characteristics specified in the second settings, and
2) provide the formatted second media content to the destination specified in the first settings.
19. A system comprising:
one or more processors;
at least one non-transitory computer-readable storage device including executable instructions which, when executed by the one or more processors, cause operations comprising:
receiving, from a user of a user interface of a media authoring application, first settings specifying a destination for first media content authored in the media authoring application, the destination selected from a set of destinations supported by the media authoring application, and second settings specifying a set of media characteristics of the first media content authored in the media authoring application; and
generating data associated with the first settings and the second settings, the data usable by an instance of the media authoring application to display a user interface element in a user interface of the instance of the media authoring application, wherein the user interface element, when invoked by a user input, causes the instance of the media authoring application to, in response to the user input, both
1) generate second media content comprising at least one frame of video, the at least one frame of video formatted based on the set of media characteristics specified in the second settings, and
2) provide the formatted second media content to the destination specified in the first settings.
Claims 2-9 are dependent method claims, claims 11-18 are dependent claims to the non-transitory computer readable storage device, and claims 20-27 are dependent system claims; the body of each dependent claim is truncated in the source.
The disclosure generally relates to sharing media content.
Media content, for example, images, audio, and video, can be authored in a media authoring application (e.g., image editor, video editor, sound editor) and published for distribution. Media content that is published in this way is sometimes referred to as media that has been shared. For example, media can be shared with users of systems such as social networking systems and online media repositories.
In one aspect, in general, a method includes receiving, from a user of a user interface of a media authoring application, first settings specifying a destination for media content authored in the media authoring application, the destination selected from a set of destinations supported by the media authoring application, and second settings specifying a set of media characteristics of media content authored in the media authoring application, and generating data representing a user interface element usable to provide media content authored in the media authoring application to the destination specified in the first settings at the set of media characteristics specified in the second settings.
Implementations may include one or more of the following features. A mechanism is provided to share the data representing the user interface element with users other than the user of the user interface of the media authoring application. The user interface element can be dragged from the user interface of the media authoring application to share the data in the form of a data file. The destination is a social networking service and the first settings include information about a user account of the social networking service. The first settings specify a second destination for the media content. The second settings specify a second set of media characteristics for the media content.
In another aspect, in general, a method includes receiving an indication that a user of a user interface of a media authoring application has invoked a first user interface element in a menu of user interface elements, each user interface element representing data that includes first settings specifying a destination for media content authored in the media authoring application, the destination selected from a set of destinations supported by the media authoring application, and second settings specifying a set of media characteristics of media content authored in the media authoring application, and in response to the user having invoked the first user interface element, providing media content authored in the media authoring application to the destination specified in the first settings at the set of media characteristics specified in the second settings.
Implementations may include one or more of the following features. Metadata of the media content is accessed and an interface is provided enabling the user to select at least some of the metadata to include in the media content supplied to the destination. The selected metadata is stored for subsequent provisions of the media content to other destinations selected from among the set of destinations supported by the media authoring application. For at least one user interface element in the menu of user interface elements, an indication is provided in the user interface of the media file characteristics for media content that would be generated if the user were to invoke the at least one user interface element.
In a further aspect, in general, a method includes receiving an indication that a user of a user interface of a video authoring application has invoked a user interface element representing data that includes first settings specifying a destination for video content authored in the video authoring application and second settings specifying a set of video characteristics of video content authored in the video authoring application, and in response to the user having invoked the user interface element, for first video content chosen by the user, the first video content having chapters each representing a portion of the first video content, providing an interface enabling the user to choose a video frame from among video frames of each chapter, and generating metadata designating each chosen video frame as representative of the content of each chapter.
Implementations may include one or more of the following features. The video content and the generated metadata are provided to the destination specified in the first settings.
Other aspects may include corresponding systems, apparatus, or computer readable media.
Details of one or more implementations are set forth in the accompanying drawings and the description below. Other features, aspects, and potential advantages will be apparent from the description and drawings, and from the claims.
Like reference symbols in the various drawings indicate like elements.
A user interface of a media authoring application (e.g., video editing software) can be used to provide media content (e.g., a finished video) to a destination. The destination could be a social networking service like Facebook, Twitter, Tumblr, or YouTube. The destination could also be another type of destination such as a storage device or other software application. The user may prefer to specify different media characteristics (e.g., video quality, video file format, video resolution) for different destinations. Rather than require a user to specify media characteristics each time the user chooses a destination, the user can establish settings that tie a particular destination to a particular set of media characteristics. The settings can be placed in a menu in the application for the user to re-use with any media content being authored in the application.
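As a rough illustration (a minimal Swift sketch; every name here is hypothetical and not taken from any actual application), a reusable preset might pair one destination with one set of media characteristics, and the share menu is then just a list of such presets:

```swift
import Foundation

// Media characteristics the user locks in once per preset.
struct MediaCharacteristics {
    var frameWidth: Int          // pixel dimensions
    var frameHeight: Int
    var videoBitrateMbps: Double // video quality knob
    var fileFormat: String       // e.g. "MPEG-4"
}

// Kinds of destinations the application supports.
enum Destination {
    case socialNetwork(service: String, account: String)
    case storageDevice(directory: URL)
}

// One menu entry: a destination tied to a set of characteristics.
struct ShareConfiguration {
    var name: String             // label shown in the share menu
    var destination: Destination
    var characteristics: MediaCharacteristics
}

// The share menu the user re-uses across projects.
let shareMenu: [ShareConfiguration] = [
    ShareConfiguration(
        name: "1080p to video site",
        destination: .socialNetwork(service: "ExampleTube", account: "editor@example.com"),
        characteristics: MediaCharacteristics(frameWidth: 1920, frameHeight: 1080,
                                              videoBitrateMbps: 8, fileFormat: "MPEG-4")),
    ShareConfiguration(
        name: "Archive to external drive",
        destination: .storageDevice(directory: URL(fileURLWithPath: "/Volumes/Archive")),
        characteristics: MediaCharacteristics(frameWidth: 3840, frameHeight: 2160,
                                              videoBitrateMbps: 45, fileFormat: "QuickTime"))
]
```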
The user interface could display the media file characteristics (e.g., file size) that would result if the user were to invoke a button in the menu and apply the underlying settings to the media content that the user is authoring. If the media content is video, the user interface could also allow the user to choose a frame representative of each chapter of the video, for example, for use in a chapter index.
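For the chapter-index case, one plausible representation (sketch only; names hypothetical) records each chosen frame as a chapter title plus a timestamp, using Core Media's real CMTime type:

```swift
import CoreMedia

// One chosen poster frame per chapter: the chapter's title and the
// timestamp of the frame the user picked as representative.
struct ChapterPoster {
    var chapterTitle: String
    var posterTime: CMTime
}

// Convert the user's picks (seconds into the video) into metadata that
// a destination could use to build a chapter index.
func posterMetadata(for picks: [(title: String, seconds: Double)]) -> [ChapterPoster] {
    picks.map { pick in
        ChapterPoster(chapterTitle: pick.title,
                      posterTime: CMTime(seconds: pick.seconds, preferredTimescale: 600))
    }
}
```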
The media destinations 112a-c could be any destination for media content. For example, the media destinations 112a-c could include social networking services (e.g., Facebook, YouTube, LinkedIn, Pinterest), online media repositories (e.g., iCloud), web servers, storage devices (e.g., a hard drive in communication with the computer system 104), or any other destination for media. The act of publishing 108 media content 110 to the media destinations 112a-c may include any combination of generating a media data file 114 from the media content 110 and formatting the media content 110 in the media data file 114. Formatting could include formatting the media data file into a format usable by the respective media destination, or altering characteristics of the media content based on settings associated with the respective media destination. If the media content 110 is a video, for example, the act of publishing 108 the video could include formatting the video to a particular data format (e.g., MPEG), and changing the resolution or audio quality to a form preferred for a particular media destination.
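On Apple platforms, the generate-and-format step could plausibly be built on AVFoundation's export API (the API calls below are real; the choice of preset per destination is purely illustrative):

```swift
import AVFoundation

// Transcode an asset into a container/preset suitable for a destination.
// A configuration could map each destination to a preset like this one.
func exportForDestination(asset: AVAsset, to outputURL: URL,
                          completion: @escaping (Bool) -> Void) {
    guard let session = AVAssetExportSession(asset: asset,
                                             presetName: AVAssetExportPreset1280x720) else {
        completion(false)   // the asset cannot be exported with this preset
        return
    }
    session.outputURL = outputURL
    session.outputFileType = .mp4   // container format preferred by the destination
    session.exportAsynchronously {
        completion(session.status == .completed)
    }
}
```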
The media authoring application 100 can be configured to publish media content 110 to media destinations 112a-c in particular ways. For example, the media authoring application 100 can have configurations 116a-c that specify instructions for publishing media content to particular media destinations 112a-c. Each configuration 116a-c could indicate at least one media destination and at least one set of media characteristics.
The instructions specified for publishing media content to a particular media destination can be of a type specific to the type of media destination. For example, if the media destination is a social networking service or a media repository, the configurations 116a-c could specify information about a user account of the social networking service or a media repository. If the media destination is a social networking service, the configurations 116a-c could specify information about privacy settings for the published media content (e.g., whether the media content will be visible by just the user associated with the user account, or visible to the public, or visible to friends/connections of the user). If the media destination is a storage device, the configurations 116a-c could specify information about a destination directory of the storage device.
The instructions specified for publishing media content under a set of media characteristics could specify one or more of several types of characteristics applicable to a particular kind of content. For example, if the media content is video, the characteristics could include frame size (pixel dimensions), aspect ratio, video compression/quality, soundtrack audio compression/quality, and file format. Video content may have other characteristics that can be specified, and other kinds of content may have other kinds of characteristics.
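Elaborating the earlier sketch (names still entirely hypothetical), the two halves of a configuration might look like this: destination-specific instructions on one side, video characteristics on the other:

```swift
import Foundation

// Who may see content published to a social networking service.
enum PrivacySetting {
    case onlyMe, friendsOnly, everyone
}

// Destination-specific instructions, varying by destination type.
enum DestinationInstructions {
    case socialNetwork(account: String, privacy: PrivacySetting)
    case mediaRepository(account: String)
    case storageDevice(directory: URL)
}

// Characteristics applicable when the media content is video.
struct VideoCharacteristics {
    var frameWidth: Int       // pixel dimensions
    var frameHeight: Int
    var aspectRatio: Double   // e.g. 16.0 / 9.0
    var videoQuality: Double  // 0...1 compression/quality trade-off
    var audioQuality: Double  // soundtrack compression/quality
    var fileFormat: String    // e.g. "MPEG-4"
}
```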
A user of the media authoring application 100 can activate one of the configurations 116a-c to publish media content 110 (e.g., media content authored in the media authoring application and selected in the media authoring application for publication). For example, the media authoring application may have a button or other control associated with the configuration, so that when a user activates the control, the selected media content is published to a media destination 112a-c specified by the configuration and using media characteristics specified by the configuration. The published media content (e.g., a data file representing the media content selected for publication) is generated by the media authoring application based on the media characteristics specified by the configuration.
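Reusing the hypothetical ShareConfiguration types from the first sketch, invoking a configuration's control reduces to the two steps described above: generate the formatted file, then provide it to the destination:

```swift
// Invoked when the user activates the control tied to a configuration.
// `render` produces the formatted data file; `deliver` uploads or copies it.
func invokeShareControl(selection: URL,
                        using config: ShareConfiguration,
                        render: (URL, MediaCharacteristics) throws -> URL,
                        deliver: (URL, Destination) throws -> Void) throws {
    let formattedFile = try render(selection, config.characteristics) // generate
    try deliver(formattedFile, config.destination)                    // provide
}
```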
In some implementations, the configurations 116a-c can each specify more than one media destination 112a-c. For example, a particular configuration may specify instructions for publishing the same media content 110 to a first media destination 112a and also to a second media destination 112b. In some examples, the same media characteristics could be used for multiple media destinations. In some examples, one set of media characteristics is specified for one media destination, and another set of media characteristics is specified for another media destination.
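A multi-destination configuration could then simply carry several destination/characteristics bundles, so one invocation publishes everywhere at once (again a hypothetical sketch, reusing the types from the first sketch):

```swift
// One preset, several destination/characteristics pairs. Bundles may
// repeat one set of characteristics across destinations or differ per destination.
struct MultiDestinationConfiguration {
    var name: String
    var bundles: [(destination: Destination, characteristics: MediaCharacteristics)]
}
```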
In some implementations, the media authoring application 100 enables the user 102 to share configurations 116a-c with other users. For example, the user 102 may generate a configuration in the media authoring application 100 and then use the media authoring application 100 to generate a data file 118 representing the configuration (e.g., containing data representing the configuration). The data file 118 can then be shared with other users. The user 102 could also accept data files from other users to add configurations 116a-c to the media authoring application 100.
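One plausible way to realize the shared data file (sketch; field names invented here) is to make the configuration serializable, so one instance of the application can write it out and another can read it back:

```swift
import Foundation

// A flattened, serializable form of a configuration.
struct PortableConfiguration: Codable {
    var name: String
    var destinationService: String
    var account: String
    var frameWidth: Int
    var frameHeight: Int
    var fileFormat: String
}

// Write the configuration to a data file the user can hand to others.
func writeConfiguration(_ config: PortableConfiguration, to url: URL) throws {
    try JSONEncoder().encode(config).write(to: url)
}

// Read a configuration file received from another user.
func readConfiguration(from url: URL) throws -> PortableConfiguration {
    try JSONDecoder().decode(PortableConfiguration.self, from: Data(contentsOf: url))
}
```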
The user interface 200 displays a share pane 210 enabling a user 102 (described above) to publish media content to media destinations.
In some implementations, a user can generate a data file representing the configuration options 218 for a chosen menu item 214. For example, the user can select and drag the menu item, e.g., to a file folder of a file system of the computer system 104 running the media authoring application 100, to share the configuration options 218 in the form of a data file. The data file can then be shared with other users and used with other instances of the media authoring application 100 used by the other users. Other users can then use the configuration options 218 in the same manner as the user of this user interface 200.
Although the first media destination 228a and the second media destination 228b represent different social networking services, in some examples, a configuration could specify two different sets of configuration options for the same media destination, e.g., the same social networking service. For example, a configuration could specify instructions for publishing media content to the same social networking service using two different user accounts or two different sets of media characteristics.
In some implementations, the information about the video clip 402 may come in part from a data file associated with the video clip 402. For example, the video clip 402 could be stored in association with a data file representing a video project. A video project is a compilation of video clips, generally arranged by a user to form a coherent theme. For example, a video project could be a feature film, a television show, an interactive multimedia project, or other compilation of media. The video project may contain information about video clips, and the media authoring application 100 can identify this information in the video project and determine whether the information is relevant to a particular video clip 402 (e.g., a video clip stored in association with the video project). In some implementations, a user is sharing a video project, as opposed to just a video clip, and so the media authoring application 100 can identify information about the video project based on the video clips contained within the video project.
In some examples, the information about the video clip 402 may come in part from a computer system (e.g., the computer system 104 described above).
The user interface 400 also displays a control 412, e.g., a "Share" button, that enables a user to publish the media content (e.g., the video clip 402) to a media destination. When the user clicks (or otherwise invokes) the control 412, the media content is published to the media destination specified by the configuration represented by the user interface 400 and using the media characteristics specified by the configuration represented by the user interface 400.
The user interface 400 also displays media file characteristics 420 for the published video clip. The media file characteristics 420 describe characteristics of a data file generated when media content is published. For example, for a video clip, the media file characteristics 420 can include a predicted file size 422 for the published video clip, a running time 424 for the published video clip, a codec 426 used to generate the media file representing the published video clip, audio channel characteristics 428 of the published video clip (e.g., stereo or mono), and video definition 430 of the published video clip (e.g., 720p, 1080i, or other video definitions representing frame resolution, scan type, or other characteristics). The media file characteristics 420 allow a user to see, right in the user interface 400 of the media authoring application 100, characteristics of published media content expected by the media authoring application 100 (e.g., expected based on the chosen configuration for publishing the media content) before the media content is published.
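The predicted file size in particular can be estimated from running time and bitrates alone; a back-of-the-envelope sketch (bitrate figures illustrative, container overhead ignored):

```swift
// Rough size estimate: total bitrate times running time, in bytes.
func predictedFileSizeBytes(durationSeconds: Double,
                            videoBitsPerSecond: Double,
                            audioBitsPerSecond: Double) -> Double {
    durationSeconds * (videoBitsPerSecond + audioBitsPerSecond) / 8  // bits -> bytes
}

// Example: a 90-second clip at 8 Mb/s video plus 256 kb/s audio
// predicts about 92.9 MB.
let estimate = predictedFileSizeBytes(durationSeconds: 90,
                                      videoBitsPerSecond: 8_000_000,
                                      audioBitsPerSecond: 256_000)
```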
The information about the video clip (or any other kind of media content) can also be saved for future publications of the video clip. For example, the second user interface 500 represents an information display 510 for the video clip referenced in the first user interface 400. The information display 510 enables a user to see information pertaining to a stored video clip, for example, information stored by the media authoring application 100 for retrieval when the video clip is accessed. The information display 510 includes share data 520 that can be used when the video clip represented by the information display 510 is published to a media destination. For example, the share data 520 can include a title 522 of the video clip, a description 524 of the video in the video clip, a creator 526 of the video clip (e.g., who generated the file or filmed the video), and tags 528 indicating topics of the video clip.

In some implementations, a user can input information to be used as some or all of the share data 520. In some implementations, some or all of the share data 520 can be determined based on information in the first user interface 400. For example, the title 404 for the video clip 402, description 406 of the video clip 402, author 408 of the video clip 402, and keywords 410 describing the video clip 402 can be entered by the user and carried over by the media authoring application 100 from the first user interface 400 into the share data 520 displayed in the second user interface 500. The share data 520 has fields corresponding to some or all of the fields displayed in the first user interface 400. For example, the title 404 of the video clip shown in the first user interface 400 can be carried over to the title 522 of the video clip shown in the second user interface 500. If information is stored as share data 520, when a user accesses the first user interface 400 for a clip, the share data 520 can be accessed and carried over to the first user interface 400 in the fields representing the corresponding type of information. For example, the tags 528 of the share data 520 can be carried over to the keywords 410 in the first user interface 400.

A user may have the option of modifying any of the information shown in the first user interface 400 or the share data 520 shown in the second user interface 500, even if the information or share data was derived from another source (e.g., information about a video clip derived from the share data). The user interface 500 may also present a share count 530 indicating how many times the video clip has been published. Other information about when and to what media destinations the video content has been published could also be displayed.
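A sketch of how such share data could be stored with a clip and merged when fields are carried over between the two interfaces (names hypothetical; the rule that user edits win over stored values is one plausible choice):

```swift
// Share data stored with a clip for re-use across publications.
struct ShareData {
    var title: String?
    var summary: String?     // "description" in the interface
    var creator: String?
    var tags: [String]
    var shareCount: Int
}

// Carry fields over: values the user just edited win; otherwise the
// stored values fill in. The share count always comes from storage.
func merge(edited: ShareData, over stored: ShareData) -> ShareData {
    ShareData(title: edited.title ?? stored.title,
              summary: edited.summary ?? stored.summary,
              creator: edited.creator ?? stored.creator,
              tags: edited.tags.isEmpty ? stored.tags : edited.tags,
              shareCount: stored.shareCount)
}
```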
Display device 1106 can be any known display technology, including but not limited to display devices using Liquid Crystal Display (LCD) or Light Emitting Diode (LED) technology. Processor(s) 1102 can use any known processor technology, including but not limited to graphics processors and multi-core processors.
Input device 1104 can be any known input device technology, including but not limited to a keyboard (including a virtual keyboard), mouse, track ball, and touch-sensitive pad or display. In some implementations, the input device 1104 could include a microphone that facilitates voice-enabled functions, such as speech-to-text, speaker recognition, voice replication, digital recording, and telephony functions. The input device 1104 can be configured to facilitate processing voice commands, voiceprinting and voice authentication. In some implementations, audio recorded by the input device 1104 is transmitted to an external resource for processing. For example, voice commands recorded by the input device 1104 may be transmitted to a network resource such as a network server which performs voice recognition on the voice commands.
Bus 1112 can be any known internal or external bus technology, including but not limited to ISA, EISA, PCI, PCI Express, NuBus, USB, Serial ATA or FireWire. Computer-readable medium 1110 can be any medium that participates in providing instructions to processor(s) 1102 for execution, including without limitation, non-volatile storage media (e.g., optical disks, magnetic disks, flash drives, etc.) or volatile media (e.g., SDRAM, ROM, etc.).
Computer-readable medium 1110 can include various instructions 1114 for implementing an operating system (e.g., Mac OS®, Windows®, Linux). The operating system can be multi-user, multiprocessing, multitasking, multithreading, real-time and the like. The operating system performs basic tasks, including but not limited to: recognizing input from input device 1104; sending output to display device 1106; keeping track of files and directories on computer-readable medium 1110; controlling peripheral devices (e.g., disk drives, printers, etc.) which can be controlled directly or through an I/O controller; and managing traffic on bus 1112. Network communications instructions 1116 can establish and maintain network connections (e.g., software for implementing communication protocols, such as TCP/IP, HTTP, Ethernet, etc.).
A graphics processing system 1118 can include instructions that provide graphics and image processing capabilities. For example, the graphics processing system 1118 can display the user interfaces described above (e.g., the user interfaces 200, 400, and 500).
Application(s) 1120 can be the media authoring application 100 that implements the processes and displays the user interfaces described above.
The described features can be implemented advantageously in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. A computer program is a set of instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result. A computer program can be written in any form of programming language (e.g., Objective-C, Java), including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
Suitable processors for the execution of a program of instructions include, by way of example, both general and special purpose microprocessors, and the sole processor or one of multiple processors or cores, of any kind of computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memories for storing instructions and data. Generally, a computer will also include, or be operatively coupled to communicate with, one or more storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks. Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits).
To provide for interaction with a user, the features can be implemented on a computer having a display device such as a CRT (cathode ray tube) or LCD (liquid crystal display) monitor for displaying information to the user and a keyboard and a pointing device such as a mouse or a trackball by which the user can provide input to the computer.
The features can be implemented in a computer system that includes a back-end component, such as a data server, or that includes a middleware component, such as an application server or an Internet server, or that includes a front-end component, such as a client computer having a graphical user interface or an Internet browser, or any combination of them. The components of the system can be connected by any form or medium of digital data communication such as a communication network. Examples of communication networks include, e.g., a LAN, a WAN, and the computers and networks forming the Internet.
The computer system can include clients and servers. A client and server are generally remote from each other and typically interact through a network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
One or more features or steps of the disclosed embodiments can be implemented using an API. An API can define one or more parameters that are passed between a calling application and other software code (e.g., an operating system, library routine, function) that provides a service, that provides data, or that performs an operation or a computation.
The API can be implemented as one or more calls in program code that send or receive one or more parameters through a parameter list or other structure based on a call convention defined in an API specification document. A parameter can be a constant, a key, a data structure, an object, an object class, a variable, a data type, a pointer, an array, a list, or another call. API calls and parameters can be implemented in any programming language. The programming language can define the vocabulary and calling convention that a programmer will employ to access functions supporting the API.
In some implementations, an API call can report to an application the capabilities of a device running the application, such as input capability, output capability, processing capability, power capability, communications capability, etc.
A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made. For example, other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. Accordingly, other implementations are within the scope of the following claims.
Inventors: Agnoli, Giovanni; Lipton, Daniel I.; Pendergast, Colleen M.; Patel, Harita J.