US20130147905A1 - Processing media streams during a multi-user video conference - Google Patents
- Publication number
- US20130147905A1 (application US 13/324,897)
- Authority
- US
- United States
- Prior art keywords
- user
- audio
- stream
- video
- media
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/15—Conference systems
- H04N7/157—Conference systems defining a virtual conference space and using avatars or agents
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/15—Conference systems
- H04N7/155—Conference systems involving storage of or access to video conference sessions
Definitions
- Embodiments relate generally to video conferencing, and more particularly to processing media streams during a multi-user video conference.
- Video conferencing is often used in business settings and enables participants to share video and audio content with each other in real-time across geographically dispersed locations.
- a communication device at each location typically uses a video camera and microphone to send video and audio streams, and uses a video monitor and speaker to play received video and audio streams.
- Video conferencing involves digital compression of video and audio streams, which are transmitted in real-time across a network from one location to another. The communication devices perform the compressing and decompressing of the video and audio streams, and maintain the data linkage via the network.
- Embodiments generally relate to processing media streams during a multi-user video conference.
- a method includes obtaining at least one audio file and obtaining one or more parameters from a remote user. The method also includes adding user-specified audio content from the at least one audio file to a media stream based on the one or more parameters.
- FIG. 1 illustrates a block diagram of an example network environment, which may be used to implement the embodiments described herein.
- FIG. 2 illustrates an example simplified flow diagram for increasing end-user engagement in a social network, according to one embodiment.
- FIG. 3 illustrates an example simplified graphical user interface (GUI), according to one embodiment.
- FIG. 4 illustrates an example GUI, according to one embodiment.
- FIG. 5 illustrates an example simplified flow diagram for increasing end-user engagement in a social network, according to one embodiment.
- FIG. 6 is a block diagram of an example server device, which may be used to implement the embodiments described herein.
- Embodiments described herein increase end-user engagement in a social network by enabling end-users to add media content items to media streams.
- an end-user may add images or videos to frames of media streams and add audio content to the media streams in real-time during a multi-user video conference.
- a system obtains one or more frames from a media stream, which may include video streams, audio streams, outbound streams, and inbound streams.
- the one or more frames may include a face of a user.
- the system determines coordinates within the one or more frames using a face detection algorithm, where the coordinates are facial coordinates.
- the system also obtains an image, which can come from the system, a third-party application, or an end-user.
- the system also obtains one or more parameters from a remote user, which may be a third-party developer or an end-user. Such parameters may include, for example, where to place the image in the one or more frames.
- the system then adds the image to the one or more frames based on the coordinates and the one or more parameters.
- the system may add the image by overlaying the image on the one or more frames.
- the system may add the image by replacing at least a portion of the one or more frames with the image.
- the system scales and rotates the image based on each of the one or more frames.
- the system may preload images and associated parameters prior to adding the image to media streams.
- the system also enables the end-users to add video content by decoding a video stream and overlaying it onto the media stream.
- the system may preload video files that include video content and preload associated parameters prior to adding the video content to media streams.
- the system also enables end-users to add audio content to media streams in real-time.
- the system may also preload audio files that include audio content and preload associated parameters prior to adding the audio content to media streams.
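The flow described above (obtain frames, determine facial coordinates, obtain a user-specified image and placement parameters, then add the image) can be sketched in Python. This is a minimal illustration, not the patented implementation: `detect_face` is a stub standing in for a real face detection algorithm, and the parameter names (`offset_rows`, `offset_cols`) are hypothetical.

```python
import numpy as np

def detect_face(frame):
    """Stub for a face detection algorithm: returns the (row, col) of the
    top of the detected head. A real system would compute this per frame."""
    return (10, 30)

def add_image(frame, image, params):
    """Add a user-specified image to a frame at coordinates derived from
    face detection plus the remote user's placement parameters."""
    top, left = detect_face(frame)
    top += params.get("offset_rows", 0)   # e.g., place antlers above the head
    left += params.get("offset_cols", 0)
    h, w = image.shape[:2]
    out = frame.copy()
    # Replace a portion of the frame with the image (one of the two
    # embodiments; the other would alpha-blend an overlay instead).
    out[top:top + h, left:left + w] = image
    return out

frame = np.zeros((120, 160, 3), dtype=np.uint8)       # blank video frame
antlers = np.full((8, 12, 3), 255, dtype=np.uint8)    # white placeholder image
result = add_image(frame, antlers, {"offset_rows": -10})
```

In the overlay embodiment, the assignment would be an alpha blend of image and frame pixels rather than an outright replacement.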
- FIG. 1 illustrates a block diagram of an example network environment 100 , which may be used to implement the embodiments described herein.
- network environment 100 includes a system 102 , which includes a server device 104 and a social network database 106 .
- Network environment 100 also includes client devices 112 , 114 , 116 , and 118 , which may communicate with each other via system 102 and a network 120 .
- FIG. 1 shows one block for each of system 102 , server device 104 , social network database 106 , and shows four blocks for client devices 112 , 114 , 116 , and 118 .
- Blocks 102 , 104 , and 106 may represent multiple systems, server devices, and social network databases. Also, there may be any number of client devices.
- network environment 100 may not have all of the components shown and/or may have other elements including other types of elements instead of, or in addition to, those shown herein.
- end-users U 2 , U 4 , U 6 , and U 8 may communicate with each other using respective client devices 112 , 114 , 116 , and 118 .
- end users U 2 , U 4 , U 6 , and U 8 may interact with each other in a multi-user video conference, where respective client devices 112 , 114 , 116 , and 118 transmit media streams to each other.
- the media streams may include different types of media streams (e.g., one or more video streams and/or one or more audio streams).
- such media streams may include video streams that display end-users U 2 , U 4 , U 6 , and U 8 , and may include associated audio streams.
- the media streams may include media streams being transmitted in different directions (e.g., one or more outbound streams and/or one or more inbound streams) relative to each client device 112 , 114 , 116 , and 118 .
- FIG. 2 illustrates an example simplified flow diagram for increasing end-user engagement in a social network, according to one embodiment.
- the method is initiated in block 202 , where system 102 obtains one or more frames from a media stream.
- the media stream may include one or more video streams and/or one or more audio streams, and the media stream may include one or more outbound streams and/or one or more inbound streams.
- the media stream may be a media stream in a multi-user video conference.
- FIG. 3 illustrates an example simplified graphical user interface (GUI) 300 , according to one embodiment.
- GUI 300 includes video windows 302 , 304 , 306 , and 308 , which display video streams of respective end-users U 2 , U 4 , U 6 , and U 8 who are participating in the multi-user video conference.
- for ease of illustration, four end-users U 2 , U 4 , U 6 , and U 8 are shown.
- GUI 300 includes a main video window 316 , which displays a video stream of the user who is currently speaking.
- main video window 316 is displaying a video stream of end-user U 6 , who is the current speaker.
- main video window 316 is a larger version of the corresponding video window (e.g., video window 306 ).
- main video window 316 may be larger than the other video windows 302 , 304 , 306 , and 308 , and may be centralized in the GUI to visually indicate that the end-user shown in main video window 316 is speaking.
- the video stream displayed in main video window 316 switches to a different video stream associated with another end-user each time a different end-user speaks.
- GUI 300 also includes a control window 320 , which includes control buttons 330 (enclosed in dotted lines). For ease of illustration, eight control buttons are shown. The number of control buttons may vary depending on the specific implementation. The functionality of control buttons 330 also vary depending on the specific implementation. Various functions of control buttons 330 are described in more detail below.
- system 102 determines coordinates within one or more frames.
- system 102 may determine coordinates using a face detection algorithm. For example, system 102 passes the one or more frames through the face detection algorithm, and the face detection algorithm calculates coordinates of various visual elements of the frame. For ease of illustration, such visual elements may be referred to as background images or base images.
- such base images may include, for example, the face of a user, and such coordinates within the given frame may be referred to as facial coordinates.
- the face of the user may include the head, eyes, nose, ears, and mouth of the user, etc.
- These base images are examples, and other embodiments may not have all the base images listed and/or may have other base images instead of, or in addition to, those listed above (e.g., hair, glasses, clothing, etc.).
- system 102 may determine coordinates using various algorithms. For example, system 102 may determine coordinates associated with sections of the at least one frame. Such sections may include, for example, four quadrants in a given frame, or any predetermined number of sections and/or predetermined areas of a given frame. In one embodiment, system 102 may determine coordinates associated with particular locations of pixels or groups of pixels in a given frame.
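As a sketch of the section-based embodiment, the following hypothetical helper maps a pixel position to one of four quadrants of a frame; any predetermined partition could be substituted.

```python
def quadrant_of(row, col, height, width):
    """Map a pixel position to one of the four quadrants of a frame:
    0 = top-left, 1 = top-right, 2 = bottom-left, 3 = bottom-right."""
    vertical = 0 if row < height // 2 else 1
    horizontal = 0 if col < width // 2 else 1
    return vertical * 2 + horizontal

# In a 480x640 frame, the point (100, 500) lies in the top-right quadrant.
quadrant = quadrant_of(100, 500, 480, 640)
```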
- system 102 obtains one or more media content items to be added to the one or more frames.
- the one or more media content items are user-specified media content items that may include one or more user-specified images and/or one or more user-specified video content items.
- the one or more media content items may include at least one image (e.g., antlers).
- the one or more media content items may include two or more different images (e.g., antlers and a mustache).
- the one or more media content items may include at least one video (e.g., video displaying a sleigh being pulled by reindeer).
- the one or more media content items may include two or more videos (e.g., video displaying a sleigh being pulled by reindeer and a star).
- system 102 may obtain any combination of these media content items (e.g., one or more images, and/or one or more videos, etc.).
- the media content items are media content items that an end-user might want to add to frames of media streams during a multi-user video conference.
- images to be added may include a mustache, eye glasses, hats, antlers, etc.
- Videos to be added may include a sleigh being pulled by reindeer, a star twinkling in the sky, snow falling from the sky, etc.
- system 102 may obtain the one or more media content items from a memory or library of system 102 , from the Internet, or any other location.
- System 102 may also obtain the image from a third-party application or an end-user (e.g., end-users U 2 , U 4 , U 6 , and U 8 ). Images may be stored as image files, and videos may be stored as video files.
- an end-user may provide the one or more media content items to system 102 using a third-party application or using an application provided by system 102 .
- system 102 obtains one or more parameters from a remote user.
- the remote user is a user who is remote from system 102 .
- the remote user may be a third-party developer.
- the third-party developer may use a third-party application or an application provided by system 102 to transmit the one or more parameters to system 102 .
- the remote user may be an end-user (e.g., end-users U 2 , U 4 , U 6 , and U 8 ).
- the end-user may use a third-party application or an application provided by system 102 to transmit the one or more parameters to system 102 .
- system 102 enables the remote user to specify with one or more parameters the location where the image is to be added to the frame.
- the remote user may specify that a given image such as antlers is to be added to specific coordinates in the frame (e.g., coordinates at the top of a head shown in the frame).
- Parameters may also include offset parameters.
- a remote user may specify that an image such as a mustache be added at a predetermined number of pixels or at a predetermined distance above the lips in a frame.
- other parameters are described in detail below.
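The offset-parameter example above (a mustache a fixed distance above the lips) amounts to adding a user-supplied offset to detected feature coordinates. A sketch, with hypothetical names and coordinates:

```python
def place_with_offset(feature_coords, offset):
    """Compute where to add an image, given a detected facial feature's
    coordinates and the remote user's offset parameter. Negative row
    offsets move the image up (e.g., a mustache above the lips)."""
    row, col = feature_coords
    d_row, d_col = offset
    return (row + d_row, col + d_col)

lips = (200, 310)                                  # assumed detected lip coordinates
mustache_at = place_with_offset(lips, (-15, 0))    # 15 pixels above the lips
```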
- system 102 adds the one or more content items to the one or more frames based on the coordinates and the one or more parameters. In various embodiments, system 102 performs the adding of the one or more content items in real-time during a multi-user video conference.
- system 102 may obtain the image and one or more parameters before obtaining the one or more frames and determining the coordinates.
- system 102 may obtain the one or more parameters and one or more frames, and determine coordinates before obtaining the image.
- system 102 may obtain the one or more parameters before obtaining the image.
- Other orderings of the steps are possible, depending on the particular implementation. In some particular embodiments, multiple steps shown as sequential in this specification may be performed at the same time.
- FIG. 4 illustrates the example GUI 300 of FIG. 3 , according to one embodiment.
- FIG. 4 shows GUI 300 , and video windows 302 , 304 , 306 , and 308 , which display video streams of respective end-users U 2 , U 4 , U 6 , and U 8 .
- FIG. 4 also shows main video window 316 , which displays a video stream of user U 6 , who is currently speaking.
- FIG. 4 also shows control window 320 , which includes control buttons 330 .
- system 102 has added an image of antlers 404 , 406 , and 408 to the frames of the video streams displayed in video windows 304 , 306 , and 308 , as well as in the video stream displayed in main video window 316 .
- system 102 performs the adding of the image when system 102 receives a command from an end-user to add the image. While many example embodiments described herein are described in the context of images, embodiments described herein also apply to video content.
- FIG. 4 also shows a sleigh 410 being pulled across main video window 316 and corresponding video window 306 .
- sleigh 410 is video content that system 102 adds to a video stream. Further embodiments involving video content are described in more detail below.
- media content items may also include audio content.
- the method embodiments described in connection to FIG. 2 may also include steps enabling an end-user in a multi-user video conference to also add audio content to the media stream.
- system 102 may obtain one or more audio files, and add audio content from the one or more audio files to the media stream based on the one or more parameters.
- Other embodiments for adding audio content to media streams are described in detail below in connection with FIG. 5 .
- system 102 has received the command from end-user U 2 , who desired to add antlers to the heads of end-users U 4 , U 6 , and U 8 .
- end-user U 2 may provide the command to system 102 by using control buttons 330 in control window 320 .
- one of the control buttons 330 may have a label that reads “antlers.”
- system 102 adds antlers to the frames associated with the other end-users U 4 , U 6 , and U 8 .
- end-user U 2 may remove the image by again selecting or clicking (e.g., toggling) the same control button 330 .
- different control buttons 330 may be associated with different images that may be added to frames of the video streams.
- one of the control buttons 330 may be selected to display a drop-down menu (or any suitable selection mechanism), and the drop-down menu may display additional control buttons for various images that may be added to the frames of video streams.
- embodiments enable an end-user, via system 102 , to modify frames of different types of streams (e.g., video streams, audio streams, etc.) and streams being transmitted in different directions (e.g., outbound streams and/or inbound streams).
- each end-user can use control buttons 330 to specify the device where a particular image is to be added to the one or more frames.
- an end-user may specify that the image (e.g., antlers) is to be added at one or more receiving client devices.
- system 102 causes the client device of each end-user to add the image.
- the end-user may specify that the image (e.g., antlers) is to be added at the sending client device.
- system 102 causes the sending client device to add the image, where the sending device broadcasts the video streams with the image already added to the frames of the broadcasted video stream. The end-user may make these selections using the appropriate control buttons 330 .
- system 102 may display a drop-down menu listing all end-users currently participating in the multi-user video conference.
- the end-user who selected the control button may select which client devices add the image by selecting the control buttons 330 associated with the corresponding end-users.
- the end-user who selected the control button 330 may also select which end-users can see the added image.
- system 102 may display a drop-down menu listing all end-users currently participating in the multi-user video conference. The end-user who selected the control button may then select which end-users can see the image.
- system 102 shows antlers 404 , 406 , and 408 in respective video windows 304 , 306 , and 308 , and in main video window 316 , but not in video window 302 .
- system 102 may preload media content (e.g., images, video content, audio content) and parameters prior to adding the media content during a multi-user video conference. For example, system 102 may preload media content items and parameters before a given multi-user video conference begins. This minimizes latency so that when the multi-user video conference begins, system 102 is ready to add media content items in real-time as soon as system 102 receives commands to add the media content items.
- system 102 may search the Internet or any suitable source of the media content item in real-time after receiving a command to add a particular media content item.
- the time to find the media content item may vary (e.g., 10 ms, 1 second, or more), depending on the network speed.
- system 102 may immediately add the media content item to the frames or may first prompt the end-user for an approval or command to add the media content item.
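The preloading behavior described above can be sketched as a simple cache: content and parameters are fetched before the conference begins, so adding an item later is a local lookup, with a miss falling back to the slower live search. The class and method names are assumptions.

```python
class MediaCache:
    """Preload media content items and their parameters before the
    conference starts, so adding them later is a dictionary lookup
    rather than a network fetch."""

    def __init__(self):
        self._items = {}

    def preload(self, identifier, content, params):
        self._items[identifier] = (content, params)

    def get(self, identifier):
        # Returns None on a cache miss; the caller could then fall back
        # to searching a live source, at higher latency.
        return self._items.get(identifier)

cache = MediaCache()
cache.preload("antlers", b"<image bytes>", {"offset_rows": -10})
```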
- a control button 330 may already have an identifier associated with a media content item to be obtained from any suitable source (e.g., system 102 , a third-party application, or an end-user).
- a control button 330 when selected may cause a field to be displayed, where the end-user may type in an identifier (e.g., “antlers”).
- system 102 tracks changes in frames in order to adjust images that were added to those frames. System 102 performs these adjustments in real-time. For example, if the base image (e.g., head) of the frame moves up or down from frame to frame, the added image (e.g., antlers) moves up or down, accordingly. Also, if the base image (e.g., head) of the frame moves left or right from frame to frame, the added image (e.g., antlers) moves left or right, accordingly.
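The frame-to-frame tracking described above reduces to shifting the added image by the same delta as the base image. A minimal sketch with hypothetical coordinate tuples:

```python
def track_overlay(prev_base, new_base, prev_overlay):
    """Shift an added image by the same amount the base image (e.g., the
    user's head) moved between two frames."""
    d_row = new_base[0] - prev_base[0]
    d_col = new_base[1] - prev_base[1]
    return (prev_overlay[0] + d_row, prev_overlay[1] + d_col)

# The head moved down 5 and right 3 pixels; the antlers follow.
antlers_at = track_overlay((100, 200), (105, 203), (80, 195))
```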
- system 102 may enable the remote user to specify various parameters that may affect such adjustments. For example, the remote user may specify that system 102 make such adjustments of the added image as quickly as the next frame changes. Such quick adjustments may require more system resources but would minimize jitter in the video stream.
- system 102 scales an added media content item (e.g., added image) based on the frame. For example, if a given user moves closer to the user's camera, the user's face would become bigger relative to the size of the video window.
- System 102 scales the added media content item accordingly. For example, with an added image, if the size of the base image (e.g., head or face of the user) increases by a particular percentage (e.g., 5%), the size of the added image (e.g., antlers) also increases by the same percentage (e.g., 5%). Conversely, if the size of the base image decreases by a particular percentage, the size of the added image also decreases by the same percentage.
- system 102 rotates an added image based on the frame. For example, if a given user rotates relative to the user's camera, the user's face would also rotate relative to the video window. System 102 rotates the added image accordingly. In other words, if the orientation of the base image (e.g., head or face of the user) rotates by a particular number of degrees (e.g., 30 degrees to the left or right), the added image (e.g., antlers) also rotates by the same number of degrees in the same direction (e.g., 30 degrees to the left or right).
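The scaling and rotation rules above can be expressed directly: the added image scales by the same percentage and rotates by the same angle as the base image. A sketch (the function names are assumptions):

```python
import math

def scale_overlay(size, base_change_pct):
    """Scale an added image by the same percentage the base image changed
    (e.g., the face grew 5%, so the antlers grow 5%)."""
    w, h = size
    factor = 1.0 + base_change_pct / 100.0
    return (round(w * factor), round(h * factor))

def rotate_point(point, center, degrees):
    """Rotate one corner of the added image around the face center by the
    same angle (and direction) the face rotated."""
    rad = math.radians(degrees)
    dx, dy = point[0] - center[0], point[1] - center[1]
    return (center[0] + dx * math.cos(rad) - dy * math.sin(rad),
            center[1] + dx * math.sin(rad) + dy * math.cos(rad))

print(scale_overlay((100, 60), 5))   # (105, 63)
```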
- system 102 may crop an added media content item (e.g., added image) based on the frame. For example, if system 102 shows antlers on the head of another end-user being relatively close to the top of the video window, the antlers may not fit in the video window. As such, system 102 would crop the top of the antlers accordingly. Similarly, if video content such as sleigh 410 moves off the screen, system 102 may crop the video as sleigh 410 moves off the screen.
- the remote user may not want the media content item to be cropped and may want the media content item to remain within the frame.
- system 102 may keep the image in the same place on the screen (e.g., upper right-hand corner, bottom center, etc.).
- system 102 may adjust the position of the image based on the frame, but keep the image in the frame. For example, if the image is a text bubble, system 102 may adjust the position of the text bubble to follow the face of the associated end-user. If the end-user were to stand up, causing the face of the end-user to move up relative to the video window, the text bubble would also move up but would stop at the top edge of the video window.
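The keep-in-frame behavior for the text bubble amounts to clamping the item's position to the frame bounds instead of cropping. A sketch:

```python
def clamp_to_frame(top, left, item_h, item_w, frame_h, frame_w):
    """Keep an added item (e.g., a text bubble) fully inside the frame:
    follow the face, but stop at the frame edges instead of cropping."""
    top = max(0, min(top, frame_h - item_h))
    left = max(0, min(left, frame_w - item_w))
    return top, left

# The face moved above the window; the bubble stops at the top edge.
print(clamp_to_frame(-25, 40, 30, 60, 480, 640))   # (0, 40)
```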
- system 102 may adjust the video graphics array (VGA) of the added image based on the VGA of the frame.
- system 102 would scale the VGA of the added image up or down in order to match the VGA of the frame. For example, if the antler image in the example above had a VGA of 640×480, and the frame had a VGA of 320×180, system 102 may scale the VGA of the antler image to 320×180 so that they match.
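Here "VGA" is used loosely to mean the pixel resolution. Matching the added image's resolution to the frame's can be sketched with a nearest-neighbor resize (a production system would use proper resampling):

```python
import numpy as np

def match_resolution(image, target_h, target_w):
    """Nearest-neighbor resize so the added image's resolution matches
    the frame's, e.g., a 480x640 image scaled down to 180x320."""
    h, w = image.shape[:2]
    rows = np.arange(target_h) * h // target_h    # source row for each target row
    cols = np.arange(target_w) * w // target_w    # source column for each target column
    return image[rows][:, cols]

antlers = np.zeros((480, 640, 3), dtype=np.uint8)
scaled = match_resolution(antlers, 180, 320)      # now matches the frame
```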
- system 102 may add an image to one or more frames by overlaying the image on the one or more frames. In another embodiment, system 102 may add an image to one or more frames by replacing at least a portion of the one or more frames with the image.
- system 102 may add an image to one or more frames that include an avatar. Such an embodiment may be implemented if the end-user does not have a video camera available. System 102 may transmit an avatar in lieu of a video of the actual face of an end-user. System 102 may overlay a portion of the frame or replace a portion of the frame (e.g., the mouth portion of the avatar) with an image of a mouth, where the image of the mouth moves and synchronizes with the voice of the end-user. In other words, the mouth lip syncs with the speaking voice of the end-user. In one embodiment, the mouth may open by different amounts based on the audio level in the audio stream.
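The avatar lip-sync described above, where the mouth opens by different amounts based on the audio level, can be sketched by mapping a chunk's peak level to a mouth opening in pixels. The scale factor and sample format are assumptions:

```python
def mouth_openness(samples, max_open_px=20):
    """Open an avatar's mouth in proportion to the audio level of the
    current chunk of the audio stream (simple peak level here).
    Samples are assumed to be floats normalized to [-1.0, 1.0]."""
    peak = max(abs(s) for s in samples)
    return round(min(1.0, peak) * max_open_px)

silence = [0.0] * 160
speech = [0.9, -0.8, 0.7] * 53   # a loud chunk peaks at 0.9
```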
- a given user may elect to use an avatar even if the given user has a video camera available.
- system 102 or any third-party application may provide one or more avatars for the user to select. In one embodiment, if there are multiple avatars, system 102 may provide the user with a menu of avatars to select.
- images of end-users U 2 , U 4 , U 6 , and U 8 shown in respective video windows 302 , 304 , 306 , and 308 of FIGS. 3 and 4 are conceptual and represent realistic images of end-users. Any one of these images may also represent an avatar.
- the media content items added to the video stream may also include video content.
- system 102 may obtain one or more video files, and may add video content from the one or more video files to the media stream based on one or more parameters.
- system 102 may obtain the one or more video files from a memory or library of system 102 , from the Internet, or any other location.
- System 102 may also obtain the one or more video files from a third-party application or an end-user (e.g., end-users U 2 , U 4 , U 6 , and U 8 ).
- the video content may include sleigh 410 being pulled by reindeer, and the remote user may set parameters associated with the video content.
- the remote user may be a third-party developer or an end-user.
- the remote user may set parameters such that sleigh 410 is to be pulled across a video window when an end-user selects a button 330 in a multi-user video conference.
- system 102 may allow the end-user to select which video windows are to display the video content (e.g., via a pull-down menu or any suitable selection mechanism).
- in FIG. 4 , for example, the end-user may choose to display sleigh 410 in main video window 316 and the video window (e.g., video window 306 ) corresponding to the end-user (e.g., end-user U 6 ) being displayed in main video window 316 .
- the end-user may choose to display sleigh 410 in any video window associated with a particular end-user (e.g., main video window 316 and video window 306 associated with end-user U 6 ).
- system 102 may add video content to the media stream by overlaying the video content into a video stream. In another embodiment, system 102 may add video content to the media stream by replacing at least a portion of a video stream with the video content.
- system 102 may enable an end-user to specify which other end-users see the video content.
- system 102 may display a drop-down menu (or any suitable selection mechanism) listing all end-users currently participating in the multi-user video conference. The end-user who selected the control button may then select which end-users can see the video content.
- system 102 may allow the end-user to add any combination of image, audio, or video content to the media stream.
- system 102 may also obtain one or more audio files, and may add audio content from the one or more audio files to the media stream based on the one or more parameters.
- FIG. 5 illustrates an example simplified flow diagram for increasing end-user engagement in a social network, according to one embodiment.
- the method is initiated in block 502 , where system 102 obtains one or more audio files.
- the one or more audio files are user-specified audio files that include user-specified audio content.
- system 102 may obtain the one or more audio files from a memory or library of system 102 , from the Internet, or any other location.
- System 102 may also obtain the one or more audio files from a third-party application or an end-user (e.g., end-users U 2 , U 4 , U 6 , and U 8 ).
- an end-user may provide the one or more audio files to system 102 using a third-party application or using an application provided by system 102 .
- audio content may include the sound of a chime, and the remote user may set parameters such that the chime sounds when an end-user enters a multi-user video conference.
- system 102 obtains one or more parameters from a remote user.
- the remote user is a third-party developer.
- the third-party developer may use a third-party application or an application provided by system 102 to transmit the one or more parameters to system 102 .
- the remote user is an end-user.
- the end-user may use a third-party application or an application provided by system 102 to transmit the one or more parameters to system 102 .
- the one or more parameters may include one or more sound control parameters, trigger parameters, and special effects parameters.
- the sound parameters may modify sound characteristics of the audio content.
- Such sound parameters may include, for example, pitch, loudness, and quality such as timbre.
- Trigger parameters may modify when and how often a particular audio content item is played.
- trigger parameters may enable looping of audio content.
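Trigger parameters such as looping can be sketched as a small scheduler that expands a trigger into playback start times (the parameter names are hypothetical):

```python
def schedule_plays(trigger, clip_len, window_len):
    """Expand trigger parameters into playback start times: play once on
    the trigger event, or loop the clip until the window ends."""
    starts = []
    t = trigger["start"]
    while t < window_len:
        starts.append(t)
        if not trigger.get("loop", False):
            break
        t += clip_len
    return starts

# A 2-second chime looped from t=1 within a 7-second window.
print(schedule_plays({"start": 1, "loop": True}, 2, 7))   # [1, 3, 5]
```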
- Special effects parameters may modify a sound such as the voice of an end-user during a multi-user video conference.
- special effects parameters may change the voice of an end-user (e.g., lower, raise, speed up, or slow down a user's voice, etc.).
- system 102 may add audio content from the at least one audio file to a media stream based on the one or more parameters. As indicated above, in one embodiment, system 102 adds the audio content in real-time during a multi-user video conference. In one embodiment, system 102 may add audio content from two or more different audio files to a media stream based on the one or more parameters. For example, the two or more different audio files may include chimes, ringing sleigh bells, music, etc. As indicated above, the media stream may include one or more outbound streams and/or one or more inbound streams. In various embodiments, the media stream may be a media stream in a multi-user video conference.
- system 102 may add audio content to the media stream by mixing the audio content into an audio stream. In another embodiment, system 102 may add audio content to the media stream by replacing at least a portion of an audio stream with the audio content.
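The mixing embodiment can be sketched as sample-wise addition with a loudness (gain) parameter and clipping, assuming samples are floats normalized to [-1.0, 1.0]:

```python
def mix(stream, content, gain=1.0):
    """Mix added audio content into an audio stream, applying a loudness
    (gain) parameter and clipping to the valid sample range."""
    out = []
    for i, s in enumerate(stream):
        c = content[i] * gain if i < len(content) else 0.0
        out.append(max(-1.0, min(1.0, s + c)))
    return out

mixed = mix([0.2, 0.5, -0.1], [0.4, 0.7, 0.0], gain=0.5)
```

Replacing a portion of the stream, the other embodiment, would substitute the content samples for the stream samples over that interval instead of summing them.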
- system 102 may enable an end-user to specify which other end-users hear the audio content.
- system 102 may enable an end-user to select a control button 330 that enables all end-users to hear the audio content.
- system 102 may enable an end-user to select a control button 330 that enables particular end-users to hear the audio content.
- the end-user may select a control button 330 so that selected end-users (e.g., end-users U 2 and U 4 ) hear the audio content.
- system 102 may display a drop-down menu (or any suitable selection mechanism) listing all end-users currently participating in the multi-user video conference. The end-user who selected the control button may then select which end-users can hear the audio content.
- system 102 may enable an end-user to create one or more groups of end-users. These groups are user-specified such that an end-user can associate one or more other end-users with each group and give each group a label (e.g., a family group, a friends group, a co-workers group, etc.). As such, a given group may be a subset of all end-users. Also, a given end-user may be associated with one or more groups. In one embodiment, system 102 may enable an end-user to select a control button 330 that enables one or more of the groups to hear the audio content. Accordingly, in a given multi-user video conference, different groups of end-users may hear different audio content based on end-user selections.
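Resolving which end-users hear the audio content from a selection of user-defined groups can be sketched as a set union over the chosen groups, restricted to conference participants (the names and data shapes are assumptions):

```python
def recipients(groups, selected_groups, all_users, sender):
    """Resolve which end-users hear the added audio: the union of the
    selected user-defined groups, limited to users currently in the
    conference, excluding the sender."""
    chosen = set()
    for name in selected_groups:
        chosen |= set(groups.get(name, []))
    return sorted(chosen & set(all_users) - {sender})

groups = {"family": ["U4", "U6"], "friends": ["U6", "U8"]}
print(recipients(groups, ["family", "friends"], ["U2", "U4", "U6", "U8"], "U2"))
# ['U4', 'U6', 'U8']
```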
- system 102 may also obtain one or more media content items in addition to audio content.
- the one or more media content items may include one or more images (e.g., antlers).
- the one or more media content items may include two or more different images (e.g., antlers and a mustache).
- the one or more media content items may include one or more videos (e.g., a video of a sled flying in the background).
- system 102 may add the one or more media content items to the media stream based on the one or more parameters.
- system 102 may add any one or more different types of media content (e.g., an audio content item, an image content item, a video content item, or any combination thereof) to the same media stream.
- system 102 may add any number of media content items of any of these media content types to the same media stream.
- the one or more parameters may specify that particular audio content be played in combination with one or more particular actions. In one embodiment, such actions may include an end-user selecting a particular control button 330 .
- system 102 may cause a user-specified sound (e.g., a chime, a ring, etc.) to play when a media content item (e.g., a mustache, eye glasses, a hat, etc.) is added. For example, a bell may ring when a mustache is added to the face of an end-user.
- the one or more parameters may specify that particular audio content be played in combination with one or more particular actions associated with an image or a video content.
- Such actions may include an image changing (e.g., a number incrementing). For example, a bell may ring when a score on a scoreboard reaches a certain number, etc.
- Such actions may include particular movements in a video. For example, bells may ring whenever a particular end-user moves their head, etc.
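One way to sketch such parameter-driven triggers is a small rule table mapping content events (a button press, a score change, a detected movement) to sounds. The event dictionary layout and rule API below are illustrative assumptions.

```python
class AudioTrigger:
    """Map content events to the sounds the parameters say should play."""

    def __init__(self):
        self.rules = []  # list of (predicate, sound_name) pairs

    def on(self, predicate, sound_name):
        self.rules.append((predicate, sound_name))

    def fire(self, event):
        """Return the sounds whose predicates match this event."""
        return [sound for pred, sound in self.rules if pred(event)]

triggers = AudioTrigger()
# A bell when a mustache image is added; a chime when a score reaches 10.
triggers.on(lambda e: e.get("type") == "image_added" and e.get("item") == "mustache", "bell")
triggers.on(lambda e: e.get("type") == "score" and e.get("value", 0) >= 10, "chime")
```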
- developers, such as third-party developers, and end-users may provide different types of media content (e.g., images, video content, and audio content) and may provide associated parameters for adding such media content to media streams.
- a third-party developer may write applications such as games for use in a multi-user video conference. Such games may be used by end-users to add images such as icons, score boards, locations of users, and other informational images to frames of video streams or to play humorous sound effects through audio streams of the multi-user video conference.
- system 102 may enable end-users in a multi-user video conference to throw objects such as pies at each other, trying to hit each other's faces before they duck out of the way. In one embodiment, a successful hit may result in a “splat” sound.
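A hit test for such a game might look like the following sketch, assuming the face detector supplies an axis-aligned bounding box at the moment of impact; the function name and box format are hypothetical.

```python
def pie_hit(throw_point, face_box):
    """Return the sound to play for a pie throw: 'splat' on a face hit, else None.

    face_box: (left, top, width, height) of the detected face at impact time;
    ducking moves the box, so the same throw can miss a moment later.
    """
    x, y = throw_point
    left, top, w, h = face_box
    hit = left <= x < left + w and top <= y < top + h
    return "splat" if hit else None
```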
- system 102 may operate with applications having seasonal themes. For example, system 102 may enable end-users to choose to have reindeer antlers/nose during the month of December. In another example, system 102 may enable end-users to choose to have a mustache added during the month of November. In yet another example, system 102 may enable end-users to enable a Santa's beard, mix audio of holiday music, and have reindeer pull a sleigh across the stream.
- Embodiments described herein provide various benefits. For example, enabling end-users in a multi-user video conferencing system to alter the video and/or audio streams increases engagement of the end-users in the video conference. Embodiments described herein also increase overall engagement among end-users in a social networking environment.
- Although system 102 is described as performing the steps described in the embodiments herein, any suitable component or combination of components of system 102, or any suitable processor or processors associated with system 102, may perform the steps described.
- FIG. 6 is a block diagram of an example server device 600 , which may be used to implement the embodiments described herein.
- server device 600 may be used to implement server device 104 of FIG. 1 , as well as to perform the method embodiments described herein.
- server device 600 includes a processor 602 , an operating system 604 , a memory 606 , and an input/output (I/O) interface 608 .
- Server device 600 also includes a social network engine 610 and a media application 612 , which may be stored in memory 606 or on any other suitable storage location or computer-readable medium.
- Media application 612 provides instructions that enable processor 602 to perform the functions described herein and other functions.
- FIG. 6 shows one block for each of processor 602 , operating system 604 , memory 606 , social network engine 610 , media application 612 , and I/O interface 608 .
- These blocks 602 , 604 , 606 , 608 , 610 , and 612 may represent multiple processors, operating systems, memories, I/O interfaces, social network engines, and media applications.
- server device 600 may not have all of the components shown and/or may have other elements including other types of elements instead of, or in addition to, those shown herein.
- routines of particular embodiments may be implemented using any suitable programming language and programming techniques. Different programming techniques may be employed such as procedural or object-oriented.
- the routines may execute on a single processing device or multiple processors. Although the steps, operations, or computations may be presented in a specific order, the order may be changed in different particular embodiments. In some particular embodiments, multiple steps shown as sequential in this specification may be performed at the same time.
- a “processor” includes any suitable hardware and/or software system, mechanism or component that processes data, signals or other information.
- a processor may include a system with a general-purpose central processing unit, multiple processing units, dedicated circuitry for achieving functionality, or other systems. Processing need not be limited to a geographic location, or have temporal limitations. For example, a processor may perform its functions in “real-time,” “offline,” in a “batch mode,” etc. Portions of processing may be performed at different times and at different locations, by different (or the same) processing systems.
- a computer may be any processor in communication with a memory.
- the memory may be any suitable processor-readable storage medium, such as random-access memory (RAM), read-only memory (ROM), magnetic or optical disk, or other tangible media suitable for storing instructions for execution by the processor.
Abstract
Description
- The present application is related to co-pending U.S. patent application Ser. No. ______, filed on Dec. 13, 2011, entitled “Processing Media Streams During a Multi-User Video Conference,” which is incorporated by reference herein.
- Embodiments relate generally to video conferencing, and more particularly to processing media streams during a multi-user video conference.
- Video conferencing is often used in business settings and enables participants to share video and audio content with each other in real-time across geographically dispersed locations. A communication device at each location typically uses a video camera and microphone to send video and audio streams, and uses a video monitor and speaker to play received video and audio streams. Video conferencing involves digital compression of video and audio streams, which are transmitted in real-time across a network from one location to another. The communication devices perform the compressing and decompressing of the video and audio streams, and maintain the data linkage via the network.
- Embodiments generally relate to processing media streams during a multi-user video conference. In one embodiment, a method includes obtaining at least one audio file and obtaining one or more parameters from a remote user. The method also includes adding user-specified audio content from the at least one audio file to a media stream based on the one or more parameters.
- FIG. 1 illustrates a block diagram of an example network environment, which may be used to implement the embodiments described herein.
- FIG. 2 illustrates an example simplified flow diagram for increasing end-user engagement in a social network, according to one embodiment.
- FIG. 3 illustrates an example simplified graphical user interface (GUI), according to one embodiment.
- FIG. 4 illustrates an example GUI, according to one embodiment.
- FIG. 5 illustrates an example simplified flow diagram for increasing end-user engagement in a social network, according to one embodiment.
- FIG. 6 is a block diagram of an example server device, which may be used to implement the embodiments described herein.
- Embodiments described herein increase end-user engagement in a social network by enabling end-users to add media content items to media streams. For example, an end-user may add images or videos to frames of media streams and add audio content to the media streams in real-time during a multi-user video conference. As described in more detail below, in one embodiment, a system obtains one or more frames from a media stream, which may include video streams, audio streams, outbound streams, and inbound streams. The one or more frames may include a face of a user. The system determines coordinates within the one or more frames using a face detection algorithm, where the coordinates are facial coordinates. The system also obtains an image, which can come from the system, a third-party application, or an end-user. The system also obtains one or more parameters from a remote user, which may be a third-party developer or an end-user. Such parameters may include, for example, where to place the image in the one or more frames.
- The system then adds the image to the one or more frames based on the coordinates and the one or more parameters. In one embodiment, the system may add the image by overlaying the image on the one or more frames. In another embodiment, the system may add the image by replacing at least a portion of the one or more frames with the image. The system scales and rotates the image based on each of the one or more frames. The system may preload images and associated parameters prior to adding the image to media streams.
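The scaling and rotation described above can be sketched as a single transform of an overlay point relative to the detected face; the face representation here (`center`, `size`, `angle`) is an assumed simplification of whatever the face detector actually reports.

```python
import math

def transform_overlay(point, face_prev, face_now):
    """Re-derive an overlay point when the base face moves, scales, or rotates.

    face_prev / face_now: dicts with 'center' (x, y), 'size', and 'angle' in degrees.
    """
    scale = face_now["size"] / face_prev["size"]
    theta = math.radians(face_now["angle"] - face_prev["angle"])
    # Vector from the old face center to the overlay point.
    vx = point[0] - face_prev["center"][0]
    vy = point[1] - face_prev["center"][1]
    # Rotate and scale that vector, then re-anchor it at the new face center.
    rx = (vx * math.cos(theta) - vy * math.sin(theta)) * scale
    ry = (vx * math.sin(theta) + vy * math.cos(theta)) * scale
    return (face_now["center"][0] + rx, face_now["center"][1] + ry)
```

Applying this to each corner of an added image moves, scales, and rotates it in step with the face, which is the per-frame adjustment the later paragraphs describe.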
- In one embodiment, the system also enables the end-users to add video content by decoding video files and overlaying the video content onto the media stream. The system may preload video files that include video content and preload associated parameters prior to adding the video content to media streams.
- In one embodiment, the system also enables end-users to add audio content to media streams in real-time. The system may also preload audio files that include audio content and preload associated parameters prior to adding the audio content to media streams.
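The preloading behavior described in the last few paragraphs can be sketched as a small cache that is filled before the conference begins and falls back to an on-demand fetch when an item was not preloaded; the class name and the `fetch` callable are hypothetical.

```python
class MediaCache:
    """Preload media items before a conference; fetch on demand otherwise."""

    def __init__(self, fetch):
        self.fetch = fetch  # callable: identifier -> media object
        self.items = {}

    def preload(self, identifiers):
        """Load items up front so they can be added with minimal latency."""
        for ident in identifiers:
            self.items[ident] = self.fetch(ident)

    def get(self, ident):
        if ident not in self.items:  # not preloaded: fetch in real-time
            self.items[ident] = self.fetch(ident)
        return self.items[ident]
```

Preloading trades a little startup work for predictable latency once commands start arriving, which matches the rationale given for preloading below.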
- FIG. 1 illustrates a block diagram of an example network environment 100, which may be used to implement the embodiments described herein. In one embodiment, network environment 100 includes a system 102, which includes a server device 104 and a social network database 106. Network environment 100 also includes client devices, which may communicate with each other via system 102 and a network 120.
- For ease of illustration, FIG. 1 shows one block for each of system 102, server device 104, and social network database 106, and shows four blocks for client devices. These blocks may represent multiple systems, server devices, databases, and client devices. In other embodiments, network environment 100 may not have all of the components shown and/or may have other elements including other types of elements instead of, or in addition to, those shown herein.
- In various embodiments, end-users U2, U4, U6, and U8 may communicate with each other using respective client devices.
- In various embodiments, the media streams may include different types of media streams (e.g., one or more video streams and/or one or more audio streams). For example, such media streams may include video streams that display end-users U2, U4, U6, and U8, and may include associated audio streams. Also, the media streams may include media streams being transmitted in different directions (e.g., one or more outbound streams and/or one or more inbound streams) relative to each client device. -
FIG. 2 illustrates an example simplified flow diagram for increasing end-user engagement in a social network, according to one embodiment. Referring to both FIGS. 1 and 2, the method is initiated in block 202, where system 102 obtains one or more frames from a media stream. As indicated above, the media stream may include one or more video streams and/or one or more audio streams, and the media stream may include one or more outbound streams and/or one or more inbound streams. Various example applications involving these different types of streams are described in more detail below. In various embodiments, the media stream may be a media stream in a multi-user video conference. -
FIG. 3 illustrates an example simplified graphical user interface (GUI) 300, according to one embodiment. In one embodiment, GUI 300 includes video windows associated with end-users U2, U4, U6, and U8.
- In one embodiment, GUI 300 includes a main video window 316, which displays a video stream of the user who is currently speaking. As shown in FIG. 3, in this particular example, main video window 316 is displaying a video stream of end-user U6, who is the current speaker. In one embodiment, main video window 316 is a larger version of the corresponding video window (e.g., video window 306). In one embodiment, main video window 316 may be larger than the other video windows to indicate that the end-user displayed in main video window 316 is speaking. In one embodiment, the video stream displayed in main video window 316 switches to a different video stream associated with another end-user each time a different end-user speaks.
- In one embodiment, GUI 300 also includes a control window 320, which includes control buttons 330 (enclosed in dotted lines). For ease of illustration, eight control buttons are shown. The number of control buttons may vary depending on the specific implementation. The functionality of control buttons 330 also varies depending on the specific implementation. Various functions of control buttons 330 are described in more detail below. - Referring still to
FIGS. 1 and 2, in block 204, system 102 determines coordinates within one or more frames. In one embodiment, system 102 may determine coordinates using a face detection algorithm. For example, system 102 passes the one or more frames through the face detection algorithm, and the face detection algorithm calculates coordinates of various visual elements of the frame. For ease of illustration, such visual elements may be referred to as background images or base images.
- In a given frame, such base images may include, for example, the face of a user, and such coordinates within the given frame may be referred to as facial coordinates. In one embodiment, the face of the user may include the head, eyes, nose, ears, and mouth of the user, etc. These base images are examples, and other embodiments may not have all the base images listed and/or may have other base images instead of, or in addition to, those listed above (e.g., hair, glasses, clothing, etc.).
- The coordinates specify where these base images are located in the one or more frames. In one embodiment, the coordinates may include Cartesian coordinates or coordinates of any other suitable coordinate system to show the location of each base image in the frame. In various embodiments, system 102 may determine coordinates using various algorithms. For example, system 102 may determine coordinates associated with sections of the at least one frame. Such sections may include, for example, four quadrants in a given frame, or any predetermined number of sections and/or predetermined areas of a given frame. In one embodiment, system 102 may determine coordinates associated with particular locations of pixels or groups of pixels in a given frame. - In
block 206, system 102 obtains one or more media content items to be added to the one or more frames. In one embodiment, the one or more media content items are user-specified media content items that may include one or more user-specified images and/or one or more user-specified video content items. For example, in one embodiment, the one or more media content items may include at least one image (e.g., antlers). In one embodiment, the one or more media content items may include two or more different images (e.g., antlers and a mustache). In one embodiment, the one or more media content items may include at least one video (e.g., video displaying a sleigh being pulled by reindeer). In one embodiment, the one or more media content items may include two or more videos (e.g., video displaying a sleigh being pulled by reindeer and a star). In various embodiments, system 102 may obtain any combination of these media content items (e.g., one or more images, and/or one or more videos, etc.).
- In one embodiment, the media content items, whether images or videos, are media content items that an end-user might want to add to frames of media streams during a multi-user video conference. For example, images to be added may include a mustache, eye glasses, hats, antlers, etc. Videos to be added may include a sleigh being pulled by reindeer, a star twinkling in the sky, snow falling from the sky, etc.
- In various embodiments, system 102 may obtain the one or more media content items from a memory or library of system 102, from the Internet, or any other location. System 102 may also obtain the image from a third-party application or an end-user (e.g., end-users U2, U4, U6, and U8). Images may be stored as image files, and videos may be stored as video files. In one embodiment, an end-user may provide the one or more media content items to system 102 using a third-party application or using an application provided by system 102. - In
block 208, system 102 obtains one or more parameters from a remote user. In one embodiment, the remote user is a user who is remote from system 102. For example, in one embodiment, the remote user may be a third-party developer. In one embodiment, the third-party developer may use a third-party application or an application provided by system 102 to transmit the one or more parameters to system 102. In another embodiment, the remote user may be an end-user (e.g., end-users U2, U4, U6, and U8). In one embodiment, the end-user may use a third-party application or an application provided by system 102 to transmit the one or more parameters to system 102.
- In various embodiments, system 102 enables the remote user to specify with one or more parameters the location where the image is to be added to the frame. For example, the remote user may specify that a given image such as antlers is to be added to specific coordinates in the frame (e.g., coordinates at the top of a head shown in the frame). Parameters may also include offset parameters. For example, a remote user may specify that an image such as a mustache be added at a predetermined number of pixels or at a predetermined distance above the lips in a frame. For ease of illustration, other parameters are described in detail below.
- In block 210, system 102 adds the one or more content items to the one or more frames based on the coordinates and the one or more parameters. In various embodiments, system 102 performs the adding of the one or more content items in real-time during a multi-user video conference.
- Although the steps, operations, or computations may be presented in a specific order, the order may be changed in particular embodiments. For example, in one embodiment, system 102 may obtain the image and one or more parameters before obtaining the one or more frames and determining the coordinates. In another embodiment, system 102 may obtain the one or more parameters and one or more frames, and determine coordinates before obtaining the image. In another embodiment, system 102 may obtain the one or more parameters before obtaining the image. Other orderings of the steps are possible, depending on the particular implementation. In some particular embodiments, multiple steps shown as sequential in this specification may be performed at the same time. -
FIG. 4 illustrates the example GUI 300 of FIG. 3, according to one embodiment. FIG. 4 shows GUI 300 and its video windows. FIG. 4 also shows main video window 316, which displays a video stream of user U6, who is currently speaking. FIG. 4 also shows control window 320, which includes control buttons 330.
- As shown in FIG. 4, system 102 has added an image of antlers to the video windows and to main video window 316. In one embodiment, system 102 performs the adding of the image when system 102 receives a command from an end-user to add the image. While many example embodiments described herein are described in the context of images, embodiments described herein also apply to video content. For example, FIG. 4 also shows a sleigh 410 being pulled across main video window 316 and corresponding video window 306. In one embodiment, sleigh 410 is video content that system 102 adds to a video stream. Further embodiments involving video content are described in more detail below.
- Furthermore, while many example media content items are described herein in the context of images and video content, media content items may also include audio content. For example, the method embodiments described in connection with FIG. 2 may also include steps enabling an end-user in a multi-user video conference to also add audio content to the media stream. For example, in one embodiment, system 102 may obtain one or more audio files, and add audio content from the one or more audio files to the media stream based on the one or more parameters. Other embodiments for adding audio content to media streams are described in detail below in connection with FIG. 5.
- In this particular example, system 102 has received the command from end-user U2, who desired to add antlers to the heads of end-users U4, U6, and U8. In various embodiments, end-user U2 may provide the command to system 102 by using control buttons 330 in control window 320. For example, one of the control buttons 330 may have a label that reads “antlers.” As such, as end-user U2 selects or clicks that control button 330, system 102 adds antlers to the frames associated with the other end-users U4, U6, and U8. In one embodiment, end-user U2 may remove the image by again selecting or clicking (e.g., toggling) the same control button 330. - In one embodiment,
different control buttons 330 may be associated with different images that may be added to frames of the video streams. In one embodiment, one of the control buttons 330 may be selected to display a drop-down menu (or any suitable selection mechanism), and the drop-down menu may display additional control buttons for various images that may be added to the frames of video streams.
- As indicated above, embodiments enable an end-user, via system 102, to modify frames of different types of streams (e.g., video streams, audio streams, etc.) and streams being transmitted in different directions (e.g., outbound streams and/or inbound streams).
- In one embodiment, each end-user can use control buttons 330 to specify the device where a particular image is to be added to the one or more frames. For example, an end-user may specify that the image (e.g., antlers) is to be added at one or more receiving client devices. As such, system 102 causes the client device of each end-user to add the image. Alternatively, the end-user may specify that the image (e.g., antlers) is to be added at the sending client device. As such, system 102 causes the sending client device to add the image, where the sending device broadcasts the video streams with the image already added to the frames of the broadcasted video stream. The end-user may make these selections using the appropriate control buttons 330.
- In one embodiment, if a control button 330 for adding an image is selected, system 102 may display a drop-down menu listing all end-users currently participating in the multi-user video conference. The end-user who selected the control button may select which client devices add the image by selecting the control buttons 330 associated with the corresponding end-users.
- In another embodiment, the end-user who selected the control button 330 may also select which end-users can see the added image. For example, an end-user (e.g., end-user U2) may select a control button 330 so that all end-users (e.g., end-users U2, U4, U6, and U8) see the image (e.g., antlers). The end-user may alternatively select a control button 330 so that selected end-users (e.g., end-users U2 and U4) see the image. In one embodiment, if a control button 330 for adding an image is selected, system 102 may display a drop-down menu listing all end-users currently participating in the multi-user video conference. The end-user who selected the control button may then select which end-users can see the image. In the example shown in FIG. 4, system 102 shows antlers in respective video windows and main video window 316, but not in video window 302. - In one embodiment,
system 102 may preload media content (e.g., images, video content, audio content) and parameters prior to adding the media content during a multi-user video conference. For example, system 102 may preload media content items and parameters before a given multi-user video conference begins. This minimizes latency so that when the multi-user video conference begins, system 102 is ready to add media content items in real-time as soon as system 102 receives commands to add the media content items.
- In one embodiment, if a particular media content item (e.g., an image) is not preloaded, system 102 may search the Internet or any suitable source of the media content item in real-time after receiving a command to add a particular media content item. The time to find the media content item may vary (e.g., 10 ms, 1 second, or more), depending on the network speed. In one embodiment, after finding the media content item, system 102 may immediately add the media content item to the frames or may first prompt the end-user for an approval or command to add the media content item. In one embodiment, a control button 330 may already have an identifier associated with a media content item to be obtained from any suitable source (e.g., server 102, third-party application, or end-user). In another embodiment, a control button 330 when selected may cause a field to be displayed, where the end-user may type in an identifier (e.g., “antlers”).
- While embodiments above are described in the context of a single frame, implementations of such embodiments extend to multiple frames, such as a series of frames in a video stream. In one embodiment,
server 102 tracks changes in frames in order to adjust images that were added to those frames. Server 102 performs these adjustments in real-time. For example, if the base image (e.g., head) of the frame moves up or down from frame to frame, the added image (e.g., antlers) moves up or down, accordingly. Also, if the base image (e.g., head) of the frame moves left or right from frame to frame, the added image (e.g., antlers) moves left or right, accordingly.
- In one embodiment, system 102 may enable the remote user to specify various parameters that may affect such adjustments. For example, the remote user may specify that system 102 make such adjustments of the added image as quickly as the next frame changes. Such quick adjustments may require more system resources but would minimize jitter in the video stream.
- In one embodiment, system 102 scales an added media content item (e.g., added image) based on the frame. For example, if a given user moves closer to the user's camera, the user's face would become bigger relative to the size of the video window. System 102 scales the added media content item accordingly. For example, with an added image, if the size of the base image (e.g., head or face of the user) increases by a particular percentage (e.g., 5%), the size of the added image (e.g., antlers) also increases by the same percentage (e.g., 5%). Conversely, if the size of the base image decreases by a particular percentage, the size of the added image also decreases by the same percentage.
- In one embodiment, system 102 rotates an added image based on the frame. For example, if a given user rotates relative to the user's camera, the user's face would also rotate relative to the video window. System 102 rotates the added image accordingly. In other words, if the orientation of the base image (e.g., head or face of the user) rotates by a particular number of degrees (e.g., 30 degrees to the left or right), the added image (e.g., antlers) also rotates by the same number of degrees in the same direction (e.g., 30 degrees to the left or right).
- In one embodiment, system 102 may crop an added media content item (e.g., added image) based on the frame. For example, if system 102 shows antlers on the head of another end-user being relatively close to the top of the video window, the antlers may not fit in the video window. As such, system 102 would crop the top of the antlers accordingly. Similarly, if video content such as sleigh 410 moves off the screen, system 102 may crop the video as sleigh 410 moves off the screen.
- In other example embodiments, the remote user may not want the media content item to be cropped and may want the media content item to remain within the frame. For example, in a game application where an image is a score board that is added to the frame, the score board should not be cropped. In one embodiment,
system 102 may keep the image in the same place on the screen (e.g., upper right-hand corner, bottom center, etc.). In one embodiment, system 102 may adjust the position of the image based on the frame, but keep the image in the frame. For example, if the image is a text bubble, system 102 may adjust the position of the text bubble to follow the face of the associated end-user. If the end-user were to stand up, causing the face of the end-user to move up relative to the video window, the text bubble would also move up but would stop at the top edge of the video window.
- In one embodiment, system 102 may adjust the video graphics array (VGA) of the added image based on the VGA of the frame. In one embodiment, system 102 would scale the VGA of the added image up or down in order to match the VGA of the frame. For example, if the antler image in the example above had a VGA of 640×480, and the frame had a VGA of 320×180, system 102 may scale the VGA of the antler image to 320×180 so that they match. - In one embodiment,
system 102 may add an image to one or more frames by overlaying the image on the one or more frames. In another embodiment,system 102 may add an image to one or more frames by replacing at least a portion of the one or more frames with the image. - In one embodiment,
system 102 may add an image to one or more frames that include an avatar. Such an embodiment may be implemented if the end-user does not have a video camera available.System 102 may transmit an avatar in lieu of a video of the actual face of an end-user.System 102 may overlay a portion of the frame or replace a portion of the frame (e.g., the mouth portion of the avatar) with an image of a mouth, where the image of the mouth moves and synchronizes with the voice of the end-user. In other words, the mouth lip syncs with the speaking voice of the end-user. In one embodiment, the mouth may open by different amounts based on the audio level in the audio stream. - In one embodiment, a given user may elect to use an avatar even if the given user has a video camera available. In various embodiments,
system 102 or any third-party application may provide one or more avatars for the user to select. In one embodiment, if there are multiple avatars,system 102 may provide the user with a menu of avatars to select. - Note that the images of end-users U2, U4, U6, and U8 shown in
respective video windows in FIGS. 3 and 4 are conceptual and represent realistic images of end-users. Any one of these images may also represent an avatar. - As indicated above, the media content items added to the video stream may also include video content. For example, in one embodiment,
system 102 may obtain one or more video files, and may add video content from the one or more video files to the media stream based on one or more parameters. In various embodiments, system 102 may obtain the one or more video files from a memory or library of system 102, from the Internet, or any other location. System 102 may also obtain the one or more video files from a third-party application or an end-user (e.g., end-users U2, U4, U6, and U8). - Referring again to
FIG. 4, in an example implementation, the video content may include sleigh 410 being pulled by reindeer, and the remote user may set parameters associated with the video content. As indicated above, the remote user may be a third-party developer or an end-user. In one embodiment, the remote user may set parameters such that sleigh 410 is to be pulled across a video window when an end-user selects a button 330 in a multi-user video conference. In one embodiment, after selecting the appropriate button 330, system 102 may allow the end-user to select which video windows are to display the video content (e.g., via a pull-down menu or any suitable selection mechanism). For example, in FIG. 4, the end-user may choose to display sleigh 410 in main video window 316 and the video window (e.g., video window 306) corresponding to the end-user (e.g., end-user U6) being displayed in main video window 316. In another example, the end-user may choose to display sleigh 410 in any video window associated with a particular end-user (e.g., main video window 316 and video window 306 associated with end-user U6). - In one embodiment,
system 102 may add video content to the media stream by overlaying the video content onto a video stream. In another embodiment, system 102 may add video content to the media stream by replacing at least a portion of a video stream with the video content. - In one embodiment,
system 102 may enable an end-user to specify which other end-users see the video content. For example, an end-user (e.g., end-user U2) may select a control button 330 so that all end-users (e.g., end-users U2, U4, U6, and U8) see the video content (e.g., sleigh 410). Alternatively, the end-user (e.g., end-user U2) may select a control button 330 so that selected end-users (e.g., end-users U2 and U4) see the video content. In one embodiment, if a control button 330 for adding video content is selected, system 102 may display a drop-down menu (or any suitable selection mechanism) listing all end-users currently participating in the multi-user video conference. The end-user who selected the control button may then select which end-users can see the video content. - In a given embodiment, the system may allow the end-user to add any combination of image, audio, or video content to the media stream.
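The per-viewer routing described above can be sketched as a simple audience filter; the function and field names below are illustrative, not from the specification:

```python
def visible_effects(effects, viewer):
    """Return the media effects one participant should see.

    `effects` is a list of (effect_name, audience) pairs, where audience
    is None for "all end-users" or a set of user IDs for a restricted
    selection made via the drop-down menu.
    """
    return [name for name, audience in effects
            if audience is None or viewer in audience]
```

For example, a sleigh shared with everyone and antlers limited to end-users U2 and U4 would reach end-user U6 as the sleigh only.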
- In one embodiment,
system 102 may also obtain one or more audio files, and may add audio content from the one or more audio files to the media stream based on the one or more parameters. -
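One plausible encoding of such a parameter set — a hypothetical sketch, since the specification leaves the format open — pairs an audio file with trigger and sound-control settings:

```python
# Hypothetical parameter set for one audio content item; the field names
# and file name are assumptions for illustration, not from the patent.
chime_params = {
    "audio_file": "chime.ogg",
    "trigger": "user_joined",   # trigger parameter: when to play
    "loop": False,              # trigger parameter: whether to repeat
    "gain": 0.8,                # sound control parameter: loudness
    "pitch_shift": 0.0,         # sound control parameter: pitch
}

def should_play(params, event):
    """Check whether an incoming conference event matches the trigger."""
    return event == params["trigger"]
```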
FIG. 5 illustrates an example simplified flow diagram for increasing end-user engagement in a social network, according to one embodiment. Referring to both FIGS. 1 and 5, the method is initiated in block 502, where system 102 obtains one or more audio files. In one embodiment, the one or more audio files are user-specified audio files that include user-specified audio content. In various embodiments, system 102 may obtain the one or more audio files from a memory or library of system 102, from the Internet, or any other location. System 102 may also obtain the one or more audio files from a third-party application or an end-user (e.g., end-users U2, U4, U6, and U8). In one embodiment, an end-user may provide the one or more audio files to system 102 using a third-party application or using an application provided by system 102. In an example implementation, audio content may include the sound of a chime, and the remote user may set parameters such that the chime sounds when an end-user enters a multi-user video conference. - In
block 504, system 102 obtains one or more parameters from a remote user. In one embodiment, the remote user is a third-party developer. In one embodiment, the third-party developer may use a third-party application or an application provided by system 102 to transmit the one or more parameters to system 102. In one embodiment, the remote user is an end-user. In one embodiment, the end-user may use a third-party application or an application provided by system 102 to transmit the one or more parameters to system 102. - In one embodiment, the one or more parameters may include one or more sound control parameters, trigger parameters, and special effects parameters. For example, the sound parameters may modify sound characteristics of the audio content. Such sound parameters may include, for example, pitch, loudness, and quality such as timbre. Trigger parameters may modify when and how often a particular audio content item is played. For example, trigger parameters may enable looping of audio content. Special effects parameters may modify a sound such as the voice of an end-user during a multi-user video conference. For example, special effects parameters may change the voice of an end-user (e.g., lower, raise, speed up, slow down a user's voice, etc.). These embodiments are some examples of how
system 102 enables remote users such as third-party developers and end-users to control the one or more parameters, which in turn control how and when audio content is played. - In
block 506, system 102 may add audio content from the at least one audio file to a media stream based on the one or more parameters. As indicated above, in one embodiment, system 102 adds the audio content in real-time during a multi-user video conference. In one embodiment, system 102 may add audio content from two or more different audio files to a media stream based on the one or more parameters. For example, the two or more different audio files may include chimes, ringing sleigh bells, music, etc. As indicated above, the media stream may include one or more outbound streams and/or one or more inbound streams. In various embodiments, the media stream may be a media stream in a multi-user video conference. - In one embodiment,
system 102 may add audio content to the media stream by mixing the audio content into an audio stream. In another embodiment, system 102 may add audio content to the media stream by replacing at least a portion of an audio stream with the audio content. - Although the steps, operations, or computations may be presented in a specific order, the order may be changed in particular embodiments. Other orderings of the steps are possible, depending on the particular implementation. In some particular embodiments, multiple steps shown as sequential in this specification may be performed at the same time.
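The mixing alternative described above reduces, per sample, to a gain-scaled sum clamped to the legal sample range. A minimal sketch, assuming 16-bit PCM samples (the specification does not fix a sample format):

```python
def mix_into_stream(stream, content, gain=1.0):
    """Mix `content` samples into `stream` samples (both 16-bit PCM),
    scaling the added content by `gain` and clipping the sum to int16."""
    mixed = []
    for i, s in enumerate(stream):
        # Content shorter than the stream contributes silence past its end.
        c = content[i] if i < len(content) else 0
        v = s + int(gain * c)
        mixed.append(max(-32768, min(32767, v)))  # clip to int16 range
    return mixed
```

Replacing a portion of the stream, by contrast, would simply copy the content samples over the corresponding stream samples instead of summing them.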
- In one embodiment,
system 102 may enable an end-user to specify which other end-users hear the audio content. In one embodiment, system 102 may enable an end-user to select a control button 330 that enables all end-users to hear the audio content. For example, an end-user (e.g., end-user U2) may select a control button 330 so that all end-users (e.g., end-users U2, U4, U6, and U8) hear the audio content. In one embodiment, system 102 may enable an end-user to select a control button 330 that enables particular end-users to hear the audio content. For example, the end-user (e.g., end-user U2) may select a control button 330 so that selected end-users (e.g., end-users U2 and U4) hear the audio content. In one embodiment, if a control button 330 for adding sound is selected, system 102 may display a drop-down menu (or any suitable selection mechanism) listing all end-users currently participating in the multi-user video conference. The end-user who selected the control button may then select which end-users can hear the audio content. - In one embodiment,
system 102 may enable an end-user to create one or more groups of end-users. These groups are user-specified such that an end-user can associate one or more other end-users with each group and give each group a label (e.g., a family group, a friends group, a co-workers group, etc.). As such, a given group may be a subset of all end-users. Also, a given end-user may be associated with one or more groups. In one embodiment, system 102 may enable an end-user to select a control button 330 that enables one or more of the groups to hear the audio content. Accordingly, in a given multi-user video conference, different groups of end-users may hear different audio content based on end-user selections. - In one embodiment,
system 102 may also obtain one or more media content items in addition to audio content. In one embodiment, the one or more media content items may include one or more images (e.g., antlers). In one embodiment, the one or more media content items may include two or more different images (e.g., antlers and a mustache). In one embodiment, the one or more media content items may include one or more videos (e.g., a video of a sled flying in the background). - In one embodiment,
system 102 may add the one or more media content items to the media stream based on the one or more parameters. In various embodiments, system 102 may add any one or more different types of media content (e.g., an audio content item, an image content item, a video content item, or any combination thereof) to the same media stream. Also, in one embodiment, system 102 may add any number of media content items of any of these media content types to the same media stream. For example, in one embodiment, the one or more parameters may specify that particular audio content be played in combination with one or more particular actions. In one embodiment, such actions may include an end-user selecting a particular control button 330. For example, if an end-user selects a control button 330 to add a media content item (e.g., mustache, eye glasses, hat, etc.), system 102 may cause a user-specified sound (e.g., a chime, a ring, etc.) to play when the media content is added. For example, a bell may ring when a mustache is added to the face of an end-user, etc. - In one embodiment, the one or more parameters may specify that particular audio content be played in combination with one or more particular actions associated with an image or a video content. Such actions may include an image changing (e.g., a number incrementing). For example, a bell may ring when a score on a scoreboard reaches a certain number, etc. Such actions may include particular movements in a video. For example, bells may ring whenever a particular end-user moves their head, etc.
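The action-triggered sounds described above amount to a lookup from conference actions to user-specified audio files; a sketch with illustrative action and file names (none of which are fixed by the specification):

```python
# Hypothetical mapping of conference actions to user-specified sounds.
SOUND_TRIGGERS = {
    "mustache_added": "bell.ogg",
    "score_reached": "bell.ogg",
    "head_moved": "bells.ogg",
}

def sound_for_action(action):
    """Return the audio file to play for an action, or None for no sound."""
    return SOUND_TRIGGERS.get(action)
```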
- As indicated above, remote users such as third-party developers and end-users may provide different types of media content (e.g., images, video content, and audio content) and associated parameters to add such media content to media streams. In one embodiment, a third-party developer may write applications such as games for use in a multi-user video conference. Such games may be used by end-users to add images such as icons, score boards, locations of users, and other informational images to frames of video streams or to play humorous sound effects through audio streams of the multi-user video conference. In another example application,
system 102 may enable end-users in a multi-user video conference to throw objects such as pies at each other, trying to hit each other's faces before they duck out of the way. In one embodiment, a successful hit may result in a “splat” sound. - In another example,
system 102 may operate with applications having seasonal themes. For example, system 102 may enable end-users to choose to have reindeer antlers/nose during the month of December. In another example, system 102 may enable end-users to choose to have a mustache added during the month of November. In yet another example, system 102 may enable end-users to add a Santa beard, mix audio of holiday music, and have reindeer pull a sleigh across the stream. - Embodiments described herein provide various benefits. For example, enabling end-users in a multi-user video conferencing system to alter the video and/or audio streams increases engagement of the end-users in the video conference. Embodiments described herein also increase overall engagement among end-users in a social networking environment.
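Season-gating such effects comes down to a date check; a sketch, with the month-to-effect assignments assumed for illustration only:

```python
import datetime

# Assumed seasonal catalog: month number -> effects offered that month.
SEASONAL_EFFECTS = {
    11: ["mustache"],
    12: ["antlers", "santa_beard", "sleigh"],
}

def available_effects(today=None):
    """List the seasonal effects offered for the current (or given) date."""
    today = today or datetime.date.today()
    return SEASONAL_EFFECTS.get(today.month, [])
```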
- While
system 102 is described as performing the steps as described in the embodiments herein, any suitable component or combination of components of system 102 or any suitable processor or processors associated with system 102 may perform the steps described. -
FIG. 6 is a block diagram of an example server device 600, which may be used to implement the embodiments described herein. For example, server device 600 may be used to implement server device 104 of FIG. 1, as well as to perform the method embodiments described herein. In one embodiment, server device 600 includes a processor 602, an operating system 604, a memory 606, and an input/output (I/O) interface 608. Server device 600 also includes a social network engine 610 and a media application 612, which may be stored in memory 606 or on any other suitable storage location or computer-readable medium. Media application 612 provides instructions that enable processor 602 to perform the functions described herein and other functions. - For ease of illustration,
FIG. 6 shows one block for each of processor 602, operating system 604, memory 606, social network engine 610, media application 612, and I/O interface 608. These blocks may each represent multiple components. In other implementations, server device 600 may not have all of the components shown and/or may have other elements including other types of elements instead of, or in addition to, those shown herein. - Although the description has been described with respect to particular embodiments thereof, these particular embodiments are merely illustrative, and not restrictive. Concepts illustrated in the examples may be applied to other examples and embodiments.
- Note that the functional blocks, methods, devices, and systems described in the present disclosure may be integrated or divided into different combinations of systems, devices, and functional blocks as would be known to those skilled in the art.
- Any suitable programming language and programming techniques may be used to implement the routines of particular embodiments. Different programming techniques may be employed such as procedural or object-oriented. The routines may execute on a single processing device or multiple processors. Although the steps, operations, or computations may be presented in a specific order, the order may be changed in different particular embodiments. In some particular embodiments, multiple steps shown as sequential in this specification may be performed at the same time.
- A “processor” includes any suitable hardware and/or software system, mechanism or component that processes data, signals or other information. A processor may include a system with a general-purpose central processing unit, multiple processing units, dedicated circuitry for achieving functionality, or other systems. Processing need not be limited to a geographic location, or have temporal limitations. For example, a processor may perform its functions in “real-time,” “offline,” in a “batch mode,” etc. Portions of processing may be performed at different times and at different locations, by different (or the same) processing systems. A computer may be any processor in communication with a memory. The memory may be any suitable processor-readable storage medium, such as random-access memory (RAM), read-only memory (ROM), magnetic or optical disk, or other tangible media suitable for storing instructions for execution by the processor.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/324,897 US9088697B2 (en) | 2011-12-13 | 2011-12-13 | Processing media streams during a multi-user video conference |
Publications (2)
Publication Number | Publication Date |
---|---|
US20130147905A1 (en) | 2013-06-13 |
US9088697B2 US9088697B2 (en) | 2015-07-21 |
Family
ID=48571616
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/324,897 Active 2033-08-21 US9088697B2 (en) | 2011-12-13 | 2011-12-13 | Processing media streams during a multi-user video conference |
Country Status (1)
Country | Link |
---|---|
US (1) | US9088697B2 (en) |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130242031A1 (en) * | 2012-03-14 | 2013-09-19 | Frank Petterson | Modifying an appearance of a participant during a video conference |
US20160191958A1 (en) * | 2014-12-26 | 2016-06-30 | Krush Technologies, Llc | Systems and methods of providing contextual features for digital communication |
GB2535445A (en) * | 2015-01-20 | 2016-08-24 | Starleaf Ltd | Audio - visual conferencing systems |
US20170068448A1 (en) * | 2014-02-27 | 2017-03-09 | Keyless Systems Ltd. | Improved data entry systems |
US20180203601A1 (en) * | 2017-01-19 | 2018-07-19 | Microsoft Technology Licensing, Llc | Simultaneous authentication system for multi-user collaboration |
US20180227482A1 (en) * | 2017-02-07 | 2018-08-09 | Fyusion, Inc. | Scene-aware selection of filters and effects for visual digital media content |
US20180300100A1 (en) * | 2017-04-17 | 2018-10-18 | Facebook, Inc. | Audio effects based on social networking data |
US10887422B2 (en) | 2017-06-02 | 2021-01-05 | Facebook, Inc. | Selectively enabling users to access media effects associated with events |
US10972701B1 (en) * | 2018-04-11 | 2021-04-06 | Securus Technologies, Llc | One-way video conferencing |
US11195314B2 (en) | 2015-07-15 | 2021-12-07 | Fyusion, Inc. | Artificially rendering images using viewpoint interpolation and extrapolation |
US11202017B2 (en) | 2016-10-06 | 2021-12-14 | Fyusion, Inc. | Live style transfer on a mobile device |
US11272140B2 (en) * | 2019-12-04 | 2022-03-08 | Meta Platforms, Inc. | Dynamic shared experience recommendations |
US11435869B2 (en) | 2015-07-15 | 2022-09-06 | Fyusion, Inc. | Virtual reality environment based manipulation of multi-layered multi-view interactive digital media representations |
US11488380B2 (en) | 2018-04-26 | 2022-11-01 | Fyusion, Inc. | Method and apparatus for 3-D auto tagging |
US11632533B2 (en) | 2015-07-15 | 2023-04-18 | Fyusion, Inc. | System and method for generating combined embedded multi-view interactive digital media representations |
US11636637B2 (en) | 2015-07-15 | 2023-04-25 | Fyusion, Inc. | Artificially rendering images using viewpoint interpolation and extrapolation |
US11776229B2 (en) | 2017-06-26 | 2023-10-03 | Fyusion, Inc. | Modification of multi-view interactive digital media representation |
US11783864B2 (en) | 2015-09-22 | 2023-10-10 | Fyusion, Inc. | Integration of audio into a multi-view interactive digital media representation |
US11876948B2 (en) | 2017-05-22 | 2024-01-16 | Fyusion, Inc. | Snapshots at predefined intervals or angles |
US11956412B2 (en) | 2020-03-09 | 2024-04-09 | Fyusion, Inc. | Drone based capture of multi-view interactive digital media |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10187684B2 (en) * | 2015-06-23 | 2019-01-22 | Facebook, Inc. | Streaming media presentation system |
US11012389B2 (en) | 2018-05-07 | 2021-05-18 | Apple Inc. | Modifying images with supplemental content for messaging |
US10681310B2 (en) | 2018-05-07 | 2020-06-09 | Apple Inc. | Modifying video streams with supplemental content for video conferencing |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020110224A1 (en) * | 2001-02-13 | 2002-08-15 | International Business Machines Corporation | Selectable audio and mixed background sound for voice messaging system |
US6683938B1 (en) * | 2001-08-30 | 2004-01-27 | At&T Corp. | Method and system for transmitting background audio during a telephone call |
US20060107303A1 (en) * | 2004-11-15 | 2006-05-18 | Avaya Technology Corp. | Content specification for media streams |
US20070200925A1 (en) * | 2006-02-07 | 2007-08-30 | Lg Electronics Inc. | Video conference system and method in a communication network |
US20070223668A1 (en) * | 2006-02-10 | 2007-09-27 | Phonebites, Inc. | Inserting content into a connection using an intermediary |
US20110090301A1 (en) * | 2009-10-21 | 2011-04-21 | Aaron Jeffrey A | Method and apparatus for providing a collaborative workplace |
US20110150200A1 (en) * | 2009-12-23 | 2011-06-23 | Sun Microsystems, Inc. | Web guided collaborative audio |
US20120169835A1 (en) * | 2011-01-05 | 2012-07-05 | Thomas Woo | Multi-party audio/video conference systems and methods supporting heterogeneous endpoints and progressive personalization |
US20120323579A1 (en) * | 2011-06-17 | 2012-12-20 | At&T Intellectual Property I, L.P. | Dynamic access to external media content based on speaker content |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5736982A (en) | 1994-08-03 | 1998-04-07 | Nippon Telegraph And Telephone Corporation | Virtual space apparatus with avatars and speech |
US5995119A (en) | 1997-06-06 | 1999-11-30 | At&T Corp. | Method for generating photo-realistic animated characters |
US6496594B1 (en) | 1998-10-22 | 2002-12-17 | Francine J. Prokoski | Method and apparatus for aligning and comparing images of the face and body from different imagers |
US7139767B1 (en) | 1999-03-05 | 2006-11-21 | Canon Kabushiki Kaisha | Image processing apparatus and database |
US6778224B2 (en) | 2001-06-25 | 2004-08-17 | Koninklijke Philips Electronics N.V. | Adaptive overlay element placement in video |
US7227976B1 (en) | 2002-07-08 | 2007-06-05 | Videomining Corporation | Method and system for real-time facial image enhancement |
US7969447B2 (en) | 2004-05-06 | 2011-06-28 | Pixar | Dynamic wrinkle mapping |
DE102005014772A1 (en) | 2005-03-31 | 2006-10-05 | Siemens Ag | Display method for showing the image of communication participant in communication terminal, involves using face animation algorithm to process determined facial coordinates of image to form animated image of calling subscriber |
US20090100484A1 (en) | 2007-10-10 | 2009-04-16 | Mobinex, Inc. | System and method for generating output multimedia stream from a plurality of user partially- or fully-animated multimedia streams |
US20090151741A1 (en) | 2007-12-14 | 2009-06-18 | Diem Ngo | A cosmetic template and method for creating a cosmetic template |
US20130044180A1 (en) | 2011-08-16 | 2013-02-21 | Sony Corporation | Stereoscopic teleconferencing techniques |
Cited By (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130242031A1 (en) * | 2012-03-14 | 2013-09-19 | Frank Petterson | Modifying an appearance of a participant during a video conference |
US9060095B2 (en) * | 2012-03-14 | 2015-06-16 | Google Inc. | Modifying an appearance of a participant during a video conference |
US10866720B2 (en) * | 2014-02-27 | 2020-12-15 | Keyless Systems Ltd. | Data entry systems |
US20170068448A1 (en) * | 2014-02-27 | 2017-03-09 | Keyless Systems Ltd. | Improved data entry systems |
US20160191958A1 (en) * | 2014-12-26 | 2016-06-30 | Krush Technologies, Llc | Systems and methods of providing contextual features for digital communication |
GB2535445B (en) * | 2015-01-20 | 2021-02-10 | Starleaf Ltd | Audio - visual conferencing systems |
GB2535445A (en) * | 2015-01-20 | 2016-08-24 | Starleaf Ltd | Audio - visual conferencing systems |
US11776199B2 (en) | 2015-07-15 | 2023-10-03 | Fyusion, Inc. | Virtual reality environment based manipulation of multi-layered multi-view interactive digital media representations |
US11636637B2 (en) | 2015-07-15 | 2023-04-25 | Fyusion, Inc. | Artificially rendering images using viewpoint interpolation and extrapolation |
US11435869B2 (en) | 2015-07-15 | 2022-09-06 | Fyusion, Inc. | Virtual reality environment based manipulation of multi-layered multi-view interactive digital media representations |
US11632533B2 (en) | 2015-07-15 | 2023-04-18 | Fyusion, Inc. | System and method for generating combined embedded multi-view interactive digital media representations |
US11195314B2 (en) | 2015-07-15 | 2021-12-07 | Fyusion, Inc. | Artificially rendering images using viewpoint interpolation and extrapolation |
US11783864B2 (en) | 2015-09-22 | 2023-10-10 | Fyusion, Inc. | Integration of audio into a multi-view interactive digital media representation |
US11202017B2 (en) | 2016-10-06 | 2021-12-14 | Fyusion, Inc. | Live style transfer on a mobile device |
US20180203601A1 (en) * | 2017-01-19 | 2018-07-19 | Microsoft Technology Licensing, Llc | Simultaneous authentication system for multi-user collaboration |
US10739993B2 (en) * | 2017-01-19 | 2020-08-11 | Microsoft Technology Licensing, Llc | Simultaneous authentication system for multi-user collaboration |
US20180227482A1 (en) * | 2017-02-07 | 2018-08-09 | Fyusion, Inc. | Scene-aware selection of filters and effects for visual digital media content |
WO2018194571A1 (en) * | 2017-04-17 | 2018-10-25 | Facebook, Inc. | Audio effects based on social networking data |
US20180300100A1 (en) * | 2017-04-17 | 2018-10-18 | Facebook, Inc. | Audio effects based on social networking data |
US11876948B2 (en) | 2017-05-22 | 2024-01-16 | Fyusion, Inc. | Snapshots at predefined intervals or angles |
US10887422B2 (en) | 2017-06-02 | 2021-01-05 | Facebook, Inc. | Selectively enabling users to access media effects associated with events |
US11776229B2 (en) | 2017-06-26 | 2023-10-03 | Fyusion, Inc. | Modification of multi-view interactive digital media representation |
US10972701B1 (en) * | 2018-04-11 | 2021-04-06 | Securus Technologies, Llc | One-way video conferencing |
US11488380B2 (en) | 2018-04-26 | 2022-11-01 | Fyusion, Inc. | Method and apparatus for 3-D auto tagging |
US11272140B2 (en) * | 2019-12-04 | 2022-03-08 | Meta Platforms, Inc. | Dynamic shared experience recommendations |
US11956412B2 (en) | 2020-03-09 | 2024-04-09 | Fyusion, Inc. | Drone based capture of multi-view interactive digital media |
Also Published As
Publication number | Publication date |
---|---|
US9088697B2 (en) | 2015-07-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9088697B2 (en) | Processing media streams during a multi-user video conference | |
US9088426B2 (en) | Processing media streams during a multi-user video conference | |
US11863336B2 (en) | Dynamic virtual environment | |
US8963916B2 (en) | Coherent presentation of multiple reality and interaction models | |
WO2022066229A1 (en) | Virtual conference view for video calling | |
US20130249947A1 (en) | Communication using augmented reality | |
US10045086B1 (en) | Interactive system for virtual cinema and method | |
US20130222371A1 (en) | Enhancing a sensory perception in a field of view of a real-time source within a display screen through augmented reality | |
US20130249948A1 (en) | Providing interactive travel content at a display device | |
US20130232430A1 (en) | Interactive user interface | |
US11752433B2 (en) | Online gaming platform voice communication system | |
JP2017056193A (en) | Remote rendering server comprising broadcaster | |
US11456887B1 (en) | Virtual meeting facilitator | |
US20130238778A1 (en) | Self-architecting/self-adaptive model | |
US20230017111A1 (en) | Spatialized audio chat in a virtual metaverse | |
US20230335121A1 (en) | Real-time video conference chat filtering using machine learning models | |
US11334310B2 (en) | Synchronization of digital content consumption | |
US11651541B2 (en) | Integrated input/output (I/O) for a three-dimensional (3D) environment | |
US20210099735A1 (en) | Computer Program, Server Device, Terminal Device and Method | |
JP7329209B1 (en) | Information processing system, information processing method and computer program | |
US10326809B2 (en) | Interactive system for virtual cinema and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: GOOGLE INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:VIVEKANANDAN, JANAHAN;PETTERSON, FRANK;CARPENTER, THOR;AND OTHERS;SIGNING DATES FROM 20111213 TO 20111219;REEL/FRAME:027416/0656 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
AS | Assignment |
Owner name: GOOGLE LLC, CALIFORNIA Free format text: CHANGE OF NAME;ASSIGNOR:GOOGLE INC.;REEL/FRAME:044142/0357 Effective date: 20170929 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |