US20120262538A1 - System And Method For Video Conferencing - Google Patents


Info

Publication number
US20120262538A1
Authority
US
United States
Prior art keywords
stream
speaker
video
current speaker
endpoint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US13/471,288
Other versions
US8619118B2
Inventor
Duanpei Wu
Shantanu Sarkar
Nermin M. Ismail
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cisco Technology Inc
Original Assignee
Cisco Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cisco Technology Inc filed Critical Cisco Technology Inc
Priority to US13/471,288 (granted as US8619118B2)
Publication of US20120262538A1
Priority to US14/098,059 (granted as US9137486B2)
Application granted
Publication of US8619118B2
Legal status: Active
Anticipated expiration


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/14: Systems for two-way working
    • H04N7/15: Conference systems

Definitions

  • the present disclosure relates generally to the field of communications.
  • a centralized multipoint control unit is traditionally used to support video conferencing.
  • a conference server receives media streams from the endpoints, mixes the streams, and sends individual streams back to the endpoints.
  • the mixing may include composition (for example), creating a two-by-two composition of four video streams.
  • Each of these sub-streams can be locked to a particular user or voice-switched where appropriate.
  • Other possible compositions can be one-by-one, one-by-two, three-by-three, etc. It is critical that timing and synchronization be precise in such video-conferencing scenarios. Additionally, bandwidth considerations should be recognized and appreciated in attempting to accommodate optimal video conferences. Accordingly, the ability to provide an effective mechanism to properly direct communications for an end user/endpoint, or to offer an appropriate protocol that optimizes bandwidth characteristics and parameters provides a significant challenge to network operators, component manufacturers, and system designers.
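As a concrete illustration of the two-by-two composition described above, the following sketch (an assumption for illustration, not part of the patent) tiles four equally sized frames, represented as NumPy arrays, into one composite frame:

```python
import numpy as np

def compose_2x2(frames):
    """Tile four equally sized H x W x 3 frames into one 2H x 2W composite."""
    if len(frames) != 4:
        raise ValueError("a two-by-two composition needs exactly four streams")
    top = np.hstack([frames[0], frames[1]])     # upper-left, upper-right
    bottom = np.hstack([frames[2], frames[3]])  # lower-left, lower-right
    return np.vstack([top, bottom])

# Four 240x320 RGB frames from four endpoints -> one 480x640 composite.
frames = [np.zeros((240, 320, 3), dtype=np.uint8) for _ in range(4)]
print(compose_2x2(frames).shape)  # (480, 640, 3)
```

A one-by-two or three-by-three layout would follow the same pattern with different row and column counts.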
  • FIG. 1 illustrates an example of a distributed video conferencing system that supports both voice activated (VA) and continuous presence (CP) streams;
  • FIG. 2 illustrates the example video conferencing system of FIG. 1 in which the current speaker is a voice activated endpoint
  • FIG. 3 illustrates the example video conferencing system of FIG. 1 in which the current speaker is a continuous presence endpoint
  • FIG. 4 illustrates an example VA stream map table that a stream controller may use to control the voice activated multicast group in the example video conferencing system of FIG. 3 ;
  • FIG. 5 illustrates an example CP stream map table that a stream controller may use to control the continuous presence multicast group in the example video conferencing system of FIG. 3 ;
  • FIG. 6 is an example of a video conferencing system implementing transcoding and/or transrating, in which the current speaker is a voice activated endpoint;
  • FIG. 7 is an example of a video conferencing system implementing transcoding and/or transrating, in which the current speaker is a continuous presence voice activated endpoint;
  • FIG. 8 illustrates an example method for video conferencing
  • FIG. 9 illustrates an example method of generating stream map tables
  • FIG. 10 illustrates an example method for communicating video streams at a media switch in support of a video conference.
  • an apparatus includes two modules.
  • a first module receives a request from a first endpoint to subscribe to a voice activated multicast group and causes the first endpoint to receive a current speaker's video stream if the first endpoint is not the current speaker and to receive a last speaker's video stream if the first endpoint is the current speaker.
  • a second module receives a request from a second endpoint to subscribe to a continuous presence multicast group and causes the second endpoint to receive a continuous presence, current speaker video stream if the second endpoint is not the current speaker and to receive a continuous presence, last speaker video stream if the second endpoint is the current speaker.
  • the continuous presence, current speaker video stream includes a composition of two or more video streams, one of which includes at least a portion of the current speaker's video stream.
  • the continuous presence, last speaker video stream includes a composition of two or more video streams, one of which includes at least a portion of a last speaker's video stream.
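The stream-selection rule shared by the two modules can be summarized in a few lines. This is an illustrative sketch; the function name and the stream labels are assumptions, not from the patent:

```python
def stream_for(endpoint, group, current_speaker):
    """Return a label for the stream an endpoint should receive.

    group is "VA" or "CP". Labels: "CS" (current speaker stream),
    "LS" (last speaker stream), "CP_CS" / "CP_LS" (the continuous
    presence compositions embedding the current / last speaker).
    """
    is_speaker = endpoint == current_speaker
    if group == "VA":
        # A VA client sees the current speaker, unless it *is* the
        # current speaker, in which case it sees the last speaker.
        return "LS" if is_speaker else "CS"
    if group == "CP":
        # The same rule, applied to the composed CP streams.
        return "CP_LS" if is_speaker else "CP_CS"
    raise ValueError(f"unknown multicast group: {group!r}")

print(stream_for("ep1", "VA", current_speaker="ep1"))  # LS
print(stream_for("ep2", "CP", current_speaker="ep1"))  # CP_CS
```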
  • a system, in another embodiment, includes one or more VA endpoints, one or more CP endpoints, and a stream controller.
  • the VA endpoints are subscribed to a voice activated multicast group, and the CP end points are subscribed to a continuous presence multicast group.
  • the stream controller instructs a media switch to multicast a current speaker's stream to each VA endpoint that is not the current speaker.
  • the stream controller also instructs the media switch to multicast a continuous presence, current speaker stream to each CP endpoint that is not the current speaker.
  • the continuous presence, current speaker video stream includes a composition of two or more video streams, one of which includes at least a portion of the current speaker's video stream.
  • FIG. 1 illustrates an example of a distributed video conferencing system 10 that supports both voice activated (VA) and continuous presence (CP) multicast streams.
  • System 10 includes endpoints 12 a , 12 b , 12 c , and 12 d (generally, endpoints 12 ); media switches 14 a , 14 b , and 14 c (generally, media switches 14 ); video bridge 16 ; and stream controller 18 .
  • Distributed video conferencing system 10 supports both a voice activated multicast group 20 and a continuous presence multicast group 22 .
  • Endpoints 12 represent clients that participate in a video conferencing session in communication system 10 .
  • Endpoints 12 may include devices that end users or other devices may use to initiate or participate in a communication, such as a computer, a personal digital assistant (PDA), a laptop, an electronic notebook, a telephone, a mobile station, an audio IP phone, a video phone appliance, a personal computer (PC) based video phone, a streaming client, or any other device, component, element, or object capable of engaging in voice, video, or data exchanges within communication system 10 .
  • Endpoints 12 may include a suitable interface to a human user, such as a microphone, a display, a keyboard, a whiteboard, a video-conferencing interface, or other terminal equipment.
  • Endpoints 12 may also be any device that seeks to initiate or participate in a communication on behalf of another entity or element, such as a program, a database, an application, a piece of software, or any other component, device, element, or object capable of initiating a voice, a video, or a data exchange within communication system 10 .
  • Data refers to any type of numeric, voice and audio, video, audio-visual, or script data, or any type of source or object code, or any other suitable information in any appropriate format that may be communicated from one point to another.
  • Each media switch 14 can perform a number of functions. Each media switch 14 may register its capabilities at startup, which may include any of the following media processing functions: 1) audio mixing, which mixes the audio of the loudest speakers and distributes loudest-speaker information to other media switches 14 ; 2) audio transcoding, which provides audio transcoding (codec translation) services that can be used by other network devices without the necessary resources (e.g., DSPs) to perform audio transcoding on their own; 3) video composition, which processes video by creating a composite view (i.e., a continuous presence view) of two or more video streams.
  • Each media switch 14 may include any suitable combination of hardware, software, algorithms, processors, devices, components, objects, application specific integrated circuits (ASICs), or elements operable to facilitate the video-conferencing capabilities and operations described in this document.
  • a video or video stream may or may not also involve audio information.
  • Video bridge 16 may perform any of the above media processing functions described above with reference to media switches 14 .
  • video bridge 16 may receive two or more video streams and generate a video stream that presents a composite view of the received video streams.
  • the resulting composition video stream allows a user to view simultaneously at least a portion of the video streams that make up the composition.
  • video bridge 16 generates the continuous presence streams, each of which is a composite of two or more streams.
  • Video bridge 16 may be provided as a service of one or more of media switches 14 .
  • video bridge 16 may be an element external to and in communication with media switches 14 .
  • video bridge 16 may be internal to media switches 14 or even replace one or more media switches 14 .
  • endpoint 12 may contain video bridging functionality.
  • Video bridge 16 may be combined with other networking equipment.
  • video bridge 16 may be provided in a router, a gateway, a switch, a loadbalancer, or in any other suitable location operable to facilitate their operations.
  • Video bridge 16 may be equipped with an audio mixer and/or video mixer.
  • video bridge 16 may include suitable software to provide the capabilities of distributed video conferencing or to execute the operations of communication system 10 as described herein.
  • these functionalities may be provided within a given network element (as described above) or performed by suitable hardware, algorithms, processors, devices, ASICs, components, objects, or elements. Note that any combination of these elements may also be used in given applications of video conferencing within communication system 10 .
  • Stream controller 18 provides instructions to endpoints 12 , media switches 14 , and video bridge 16 to control communication of the video streams (including multicasting and unicasting).
  • Stream controller 18 may be any other suitable combination of hardware, software, algorithms, processors, devices, components, objects, application specific integrated circuits (ASICs), or elements operable to facilitate any of the video-conferencing control functions.
  • Stream controller 18 may be a separate external module (as illustrated in FIG. 1 ), or it may be functionality built into or associated with one or more other modules, such as endpoints 12 , media switches 14 , video bridge 16 , routers, gateways, switches, loadbalancers, or any other suitable communication or processing equipment.
  • Distributed video conferencing system 10 supports both a voice activated multicast group 20 and a continuous presence multicast group 22 .
  • Each endpoint 12 may subscribe to a voice activated multicast group 20 or continuous presence multicast group 22 and thus receive the video stream associated with that particular multicast group.
  • endpoints 12 may multicast their video streams.
  • voice activated multicast group 20 and continuous presence multicast group 22 may be source specific multicast (SSM) groups.
  • media switches 14 or video bridge 16 may act as an intermediary between the participants' endpoint 12 and the rest of system 10 .
  • Voice activated multicast group 20 is associated with the voice activated stream, which carries the video of the current speaker at any given time. However, if the current speaker is subscribed to the voice activated multicast group 20 , the current speaker typically will receive the last speaker video stream as opposed to the current speaker video stream.
  • Continuous presence multicast group 22 is associated with a continuous presence stream, which is a stream composed from several streams, one of which is typically the voice activated stream (i.e., the video of the current active speaker at any given time). Again, as with the voice activated multicast group 20 , if the current speaker is subscribed to the continuous presence multicast group 22 , the current speaker typically will see the last speaker (as opposed to the current speaker) as part of his or her continuous presence stream.
  • system 10 may accommodate endpoints 12 that support different video characteristics in terms of codec types, frame rates, bit rates, etc.
  • system 10 may be able to transcode and transrate video streams so that the same stream (voice activated stream or continuous presence stream) may be sent to several endpoints 12 that support different video codecs.
  • voice activated multicast groups 20 may be assigned to accommodate other various video characteristics
  • continuous presence multicast groups 22 may be assigned to accommodate other various video characteristics
  • Video conference participants who are interested in receiving the voice activated streams or continuous presence streams subscribe to the appropriate one of voice activated multicast group 20 or continuous presence multicast group 22 .
  • Endpoints 12 that subscribe to voice activated multicast group 20 are called voice activated clients (VA clients).
  • endpoints 12 a and 12 d are VA clients.
  • Endpoints 12 that subscribe to continuous presence multicast group 22 are called continuous presence clients (CP clients).
  • endpoints 12 b and 12 c are CP clients.
  • the voice activated streams or continuous presence streams may include a current speaker (CS) stream and/or a last speaker (LS) stream.
  • the current speaker stream candidates are one or more streams from endpoints 12 with the loudest audio.
  • stream controller 18 may select the stream that has the loudest audio, which thus qualifies as the current speaker stream.
  • a current speaker stream becomes the last speaker stream when one of the other streams from endpoints 12 is selected to be the current speaker stream.
  • the current speaker stream becomes the last speaker stream when at least one of the other streams from endpoints 12 has a higher audio volume.
  • the current speaker stream is typically multicast so that any endpoint 12 requiring the stream can receive it, and the last speaker stream is typically unicast to endpoint 12 associated with the current speaker.
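The speaker-switching rule above, in which the current speaker stream becomes the last speaker stream when a louder stream is selected, might be sketched as follows. The function name and the use of raw audio levels are assumptions; a real controller would likely add hysteresis so brief noises do not flip the speaker:

```python
def update_speakers(audio_levels, current, last):
    """Select the endpoint with the loudest audio as current speaker.

    audio_levels: dict mapping endpoint id -> measured audio level.
    When the speaker changes, the previous current speaker becomes
    the last speaker. Returns the new (current, last) pair.
    """
    loudest = max(audio_levels, key=audio_levels.get)
    if loudest != current:
        current, last = loudest, current
    return current, last

current, last = "ep1", "ep3"
current, last = update_speakers({"ep1": 10, "ep2": 80, "ep3": 5}, current, last)
print(current, last)  # ep2 ep1
```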
  • distributed video conference system 10 supports a mixed mode of both voice activated streams and continuous presence streams.
  • endpoint 12 associated with the current speaker receives the last speaker video stream from endpoint 12 associated with the last speaker
  • other endpoints 12 receive the current speaker video stream from endpoint 12 associated with the current speaker.
  • stream controller 18 sends a signal to the media switch 14 that hosts endpoint 12 associated with the current speaker to instruct the media switch 14 to multicast its endpoint video stream to one or more endpoints 12 subscribed to voice activated multicast groups 20 .
  • stream controller 18 may send a signal to the media switch 14 that hosts endpoint 12 associated with the last speaker to instruct the media switch 14 to unicast its video stream to endpoint 12 associated with the current speaker, via the hosting media switch 14 .
  • Last speaker endpoint 12 via its media switch 14 may unicast its video stream directly to current active speaker endpoint 12 via its media switch 14 .
  • stream controller 18 may send the signal directly to endpoint 12 that has the required capabilities to participate in the distributed video conference, such as multicast.
  • last speaker endpoint 12 may communicate its video stream through a transcoder or transrater to current speaker endpoint 12 .
  • system 10 may perform bandwidth sharing between the active speaker multicast stream and the last speaker unicast stream.
  • For CP clients, video bridge 16 generates two CP streams. One of the CP streams has the current speaker video stream as one of its composed videos (CP CS ). The other CP video stream has the last speaker video stream as one of its composed videos (CP LS ). Endpoint 12 associated with the current speaker receives the CP LS video stream that has the last speaker video stream as one of its composed videos, and other endpoints 12 receive the CP CS video stream that has the current speaker video stream as one of its composed videos. To generate these two CP video streams, video bridge 16 subscribes to voice activated multicast group 20 , so that video bridge 16 receives the current speaker video stream.
  • stream controller 18 may send a signal to endpoint 12 (in one particular embodiment, via media switch 14 ) associated with the last speaker to instruct endpoint 12 to unicast its video stream to video bridge 16 so that video bridge 16 can use the last speaker stream to generate the CP LS for communication to endpoint 12 b or 12 c associated with the current speaker.
  • the last speaker endpoint 12 may unicast its video stream directly to video bridge 16 .
  • last speaker endpoint 12 may communicate its video stream through a transcoder or transrater to video bridge 16 .
  • system 10 may perform bandwidth sharing between the active speaker multicast stream and the last speaker unicast stream received by the voice activated participant or video bridge 16 .
  • System 10 may also share bandwidth between the CP CS multicast stream and the unicast CP LS stream.
  • FIG. 2 illustrates video conferencing system 10 in which the current speaker is endpoint 12 a , which is a VA client.
  • endpoints 12 a and 12 d joined the video conference as voice activated clients and subscribed to voice activated multicast group 20
  • endpoints 12 b and 12 c joined the video conference as continuous presence clients and subscribed to continuous presence multicast group 22 .
  • Endpoint 12 a has been designated the current speaker (CS)
  • endpoint 12 c has been designated the last speaker (LS).
  • video bridge 16 also joined the video conference as a voice activated client and subscribed to voice activated multicast group 20 .
  • Endpoint 12 a , the current speaker, multicasts its current speaker stream 50 to the other VA clients, which include endpoint 12 d and video bridge 16 .
  • endpoint 12 a communicates its current speaker stream 50 to media switch 14 b , which communicates stream 50 via voice activated multicast group 20 to media switches 14 a and 14 c .
  • Media switch 14 a communicates current speaker stream 50 to video bridge 16
  • media switch 14 c communicates current speaker stream 50 to endpoint 12 d.
  • Video bridge 16 receives current speaker stream 50 , generates continuous presence, current speaker (CP CS ) stream 52 , and multicasts continuous presence, current speaker (CP CS ) stream 52 to the CP clients, which include endpoints 12 b and 12 c . Because video bridge 16 is a voice activated client, video bridge 16 receives current speaker stream 50 when that stream is multicast to the voice activated multicast group 20 . Video bridge 16 uses current speaker stream 50 to generate continuous presence, current speaker (CP CS ) stream 52 . Continuous presence, current speaker (CP CS ) stream 52 allows participants to view several streams simultaneously, one of which is the current speaker.
  • the other streams may be fixed streams, streams from other endpoints 12 , streams from a video presentation, a slideshow, streams from a computer, or streams from any other suitable visual representation.
  • video bridge 16 communicates continuous presence, current speaker (CP CS ) stream 52 to media switch 14 a , which communicates stream 52 via continuous presence multicast group 22 to media switches 14 b and 14 c .
  • Media switch 14 b communicates continuous presence, current speaker (CP CS ) stream 52 to endpoint 12 b
  • media switch 14 c communicates continuous presence, current speaker (CP CS ) stream 52 to endpoint 12 c.
  • Endpoint 12 c , which is designated as the last speaker in this example, unicasts its last speaker (LS) stream 54 to endpoint 12 a , the VA client designated as the current speaker in this example.
  • Endpoint 12 a is a VA client and thus would typically receive current speaker stream 50 from voice activated multicast group 20 . Because endpoint 12 a is the current speaker, current speaker stream 50 would present the participant at endpoint 12 a with a video of himself or herself. Rather than present the participant at endpoint 12 a with a video of himself or herself, endpoint 12 a receives the stream 54 of the last speaker.
  • Endpoint 12 c communicates last speaker stream 54 to media switch 14 c , which communicates it to media switch 14 b . Media switch 14 b communicates last speaker stream 54 to endpoint 12 a . Because the current speaker is a VA client rather than a CP client, video bridge 16 does not receive the unicast of last speaker stream 54 from the last speaker, endpoint 12 c.
  • Stream controller 18 communicates instructions to media switches 14 to control the processing and/or communication of streams 50 , 52 , and 54 as described above.
  • stream controller 18 may communicate instructions to endpoints 12 and video bridge 16 regarding the processing and/or communication of streams 50 , 52 , and 54 .
  • Stream controller 18 also may communicate instructions to control the processing or communication of the other streams that video bridge 16 combines with current speaker stream 50 to generate continuous presence, current speaker (CP CS ) stream 52 .
  • FIG. 3 illustrates an example video conferencing system in which the current speaker is a CP client.
  • endpoints 12 a and 12 d joined the video conference as VA clients and subscribed to voice activated multicast group 20
  • endpoints 12 b and 12 c joined the video conference as CP clients and subscribed to continuous presence multicast group 22
  • Endpoint 12 a has been designated the last speaker (LS)
  • endpoint 12 c has been designated the current speaker (CS).
  • video bridge 16 also joined the video conference as a voice activated client and subscribed to voice activated multicast group 20 .
  • Endpoint 12 c , the current speaker, multicasts its current speaker stream 60 to the VA clients, which include endpoints 12 a and 12 d and video bridge 16 .
  • Endpoint 12 c communicates its current speaker stream 60 to media switch 14 c , which communicates stream 60 via voice activated multicast group 20 to media switches 14 a and 14 b .
  • Media switch 14 c also communicates stream 60 to the voice activated client, endpoint 12 d .
  • Media switch 14 a communicates current speaker stream 60 to video bridge 16
  • media switch 14 b communicates current speaker stream 60 to endpoint 12 a.
  • Video bridge 16 receives current speaker stream 60 , generates continuous presence, current speaker (CP CS ) stream 62 , and multicasts continuous presence, current speaker (CP CS ) stream 62 to the CP clients, in this case, endpoint 12 b . Because video bridge 16 is a VA client, video bridge 16 receives current speaker stream 60 when that stream is multicast to the voice activated multicast group 20 . Video bridge 16 uses current speaker stream 60 to generate continuous presence, current speaker (CP CS ) stream 62 . Continuous presence, current speaker (CP CS ) stream 62 allows participants to view several streams simultaneously, one of which is the current speaker.
  • the other streams may be fixed streams, streams from other endpoints 12 , streams from a video presentation, a slideshow, streams from a computer, or streams from any other suitable visual representation.
  • Video bridge 16 communicates continuous presence, current speaker (CP CS ) stream 62 to media switch 14 a , which communicates stream 62 via continuous presence multicast group 22 to media switch 14 b .
  • Media switch 14 b communicates continuous presence, current speaker (CP CS ) stream 62 to endpoint 12 b.
  • Endpoint 12 a , which is designated as the last speaker in this example, unicasts its last speaker (LS) stream 64 to video bridge 16 .
  • the current speaker is associated with endpoint 12 c , a CP client.
  • Endpoint 12 c , as a CP client, would typically receive continuous presence, current speaker (CP CS ) stream 62 from continuous presence multicast group 22 . Because endpoint 12 c is the current speaker, continuous presence, current speaker (CP CS ) stream 62 would present the participant at endpoint 12 c with a video of himself or herself.
  • endpoint 12 c receives continuous presence, last speaker (CP LS ) stream 66 , which is the continuous presence stream with the last speaker video instead of the current speaker video.
  • Endpoint 12 a communicates last speaker stream 64 to media switch 14 b , which communicates it to media switch 14 a .
  • Media switch 14 a communicates last speaker stream 64 to video bridge 16 . Because the current speaker, endpoint 12 c , is a CP client rather than a VA client, last speaker stream 64 is not unicast to the current speaker.
  • Video bridge 16 uses last speaker stream 64 to generate continuous presence, last speaker (CP LS ) stream 66 .
  • Continuous presence, last speaker (CP LS ) stream 66 is like continuous presence, current speaker (CP CS ) stream 62 but includes last speaker stream 64 in place of current speaker stream 60 .
  • Continuous presence, last speaker (CP LS ) stream 66 allows a CP client, who is the current speaker, to view several streams simultaneously, one of which is the last speaker.
  • the other streams in continuous presence, last speaker (CP LS ) stream 66 may be fixed streams, streams from other endpoints 12 , streams from a video presentation, a slideshow, streams from a computer, or streams from any other suitable visual representation.
  • video bridge 16 communicates continuous presence, last speaker (CP LS ) stream 66 to media switch 14 a , which communicates stream 66 to media switch 14 c .
  • Media switch 14 c communicates continuous presence, last speaker (CP LS ) stream 66 to endpoint 12 c.
  • Stream controller 18 communicates instructions to media switches 14 to control the processing and/or communication of streams 60 , 62 , 64 , and 66 as described above.
  • stream controller 18 may communicate instructions to endpoints 12 and video bridge 16 regarding the processing and/or communication of streams 60 , 62 , 64 , and 66 .
  • Stream controller 18 also may communicate instructions to control the processing or communication of the other streams that video bridge 16 combines with current speaker stream 60 and last speaker stream 64 to generate continuous presence, current speaker (CP CS ) stream 62 and continuous presence, last speaker (CP LS ) stream 66 .
  • FIG. 4 illustrates an example VA stream map table 70 that stream controller 18 may use to control voice activated multicast group 20 in the example video conferencing system 10 of FIG. 3 .
  • Table 70 identifies the video streams by their stream IDs in first column 72 and the media switch ID in second column 74
  • First row 76 of table 70 identifies the current speaker stream 60 that is multicast to the VA clients, such as endpoints 12 that subscribed to voice activated multicast group 20 .
  • the current speaker is endpoint 12 c .
  • First row 76 identifies stream 60 of endpoint 12 c as “EP 3-1 in,” which stands for the input stream of endpoint 3 - 1 .
  • First row 76 also identifies media switch 14 c (MS 3 ), as the media switch that receives stream 60 .
  • Table 70 does not specify a target for the current speaker's video stream because, as mentioned above, current speaker stream 60 is multicast to the VA clients.
  • Second row 78 identifies last speaker stream 64 .
  • the last speaker is endpoint 12 a .
  • Second row identifies stream 64 of endpoint 12 a as “EP 2-1 in,” which stands for the input stream of endpoint 2 - 1 .
  • Second row 78 also identifies media switch 14 b (MS 2 ), as the media switch that receives last speaker stream 64 .
  • Third row 79 identifies the target of last speaker stream 64 identified in second row 78 .
  • the target of the last speaker stream 64 is video bridge 16 (because the current speaker is a CP client).
  • Third row 79 identifies the target as “VB-LS,” which is the name of the last speaker stream communicated from media switch 14 a to video bridge 16 .
  • Third row 79 also identifies the media switch to which the last speaker's video stream should be communicated, which is media switch 14 a (MS 1 ).
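The three rows of VA stream map table 70 described above can be represented as a small data structure. This sketch only restates the table; the tuple layout is an assumption, with labels taken from the description of FIG. 4:

```python
# Rows of VA stream map table 70: (stream ID, media switch).
va_stream_map = [
    ("EP 3-1 in", "MS 3"),  # row 76: current speaker stream 60 (multicast, no target)
    ("EP 2-1 in", "MS 2"),  # row 78: last speaker stream 64
    ("VB-LS",     "MS 1"),  # row 79: target, last speaker stream forwarded to video bridge 16
]

for stream_id, media_switch in va_stream_map:
    print(f"{stream_id} @ {media_switch}")
```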
  • FIG. 5 illustrates example CP stream map table 80 that stream controller 18 may use to control the continuous presence multicast group 22 in the example video conferencing system 10 of FIG. 3 .
  • Table 80 identifies the video streams by their stream IDs in first column 82 and media switch ID in second column 84
  • First row 86 of table 80 identifies continuous presence, current speaker (CP CS ) stream 62 that is multicast to the CP clients (e.g., endpoints 12 subscribed to continuous presence multicast group 22 ).
  • video bridge 16 generates and communicates current speaker continuous presence (CP CS ) stream 62 to media switch 14 a .
  • First row 86 identifies this stream as “CP CS ,”
  • First row 86 also identifies media switch 14 a (MS 1 ), as the media switch that receives this stream.
  • Table 80 does not specify a target for this stream because, as mentioned above, continuous presence, current speaker (CP CS ) stream 62 is multicast to the CP clients.
  • Second row 88 identifies continuous presence, last speaker (CP LS ) stream 66 .
  • video bridge 16 generates and communicates continuous presence, last speaker (CP LS ) stream 66 to media switch 14 a .
  • Second row 88 identifies this stream 66 as “CP LS .”
  • Second row 88 also identifies media switch 14 a (MS 1 ), as the media switch that receives this stream 66 .
  • Third row 89 identifies the target of continuous presence, last speaker (CP LS ) stream 66 in second row 88 .
  • continuous presence, last speaker (CP LS ) stream 66 is communicated to endpoint 12 c .
  • Third row 89 identifies the target stream as “EP 3-1 out,” which stands for the output stream to endpoint 3 - 1 .
  • Third row 89 also identifies media switch 14 c (MS 3 ) as the media switch 14 to which continuous presence, last speaker (CP LS ) stream 66 should be communicated.
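Putting the FIG. 5 rows together, a stream controller could turn such a table into per-switch instructions. The dictionary layout and the `instructions` helper are assumptions used for illustration, not from the patent:

```python
# CP stream map table 80, per the description of FIG. 5. A source
# row with no target is multicast; a target names the switch and
# output stream that should receive the source.
cp_stream_map = [
    {"stream_id": "CP CS", "switch": "MS 1", "target": None},
    {"stream_id": "CP LS", "switch": "MS 1",
     "target": {"stream_id": "EP 3-1 out", "switch": "MS 3"}},
]

def instructions(table):
    """Render each row as a human-readable switching instruction."""
    out = []
    for row in table:
        if row["target"] is None:
            out.append(f"{row['switch']}: multicast {row['stream_id']} to the CP group")
        else:
            t = row["target"]
            out.append(f"{row['switch']}: forward {row['stream_id']} "
                       f"to {t['switch']} as {t['stream_id']}")
    return out

for line in instructions(cp_stream_map):
    print(line)
```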
  • FIG. 6 is an example of a video conferencing system 100 implementing transcoding and/or transrating, in which the current speaker is a VA client. Similar to FIGS. 1-3 , system 100 in FIG. 6 includes endpoints 112 a , 112 b , 112 c , 112 d , 112 e , 112 f , 112 g , and 112 h (generally, endpoints 112 ); media switches 114 a , 114 b , 114 c , 114 d , 114 e , 114 f , and 114 g (generally, media switches 114 ); video bridges 116 a and 116 b (generally, video bridges 116 ); stream controller 118 ; voice activated multicast groups 120 a and 120 b (generally, voice activated multicast group 120 ); and continuous presence multicast groups 122 a and 122 b (generally, continuous presence multicast group 122 ).
  • voice activated multicast groups 120 a and 120 b multicast current speaker streams 150 a and 150 b
  • continuous presence multicast groups 122 a and 122 b multicast continuous presence, current speaker (CP CS ) streams 152 a and 152 b.
  • a first portion 102 of system 100 operates using one codec (in this example, H263), and a second portion 104 of system 100 operates using a different codec (in this example, H264).
  • the transcoding and/or transrating is implemented by a pair of virtual clients 106 and 108 .
  • Virtual client 106 is subscribed to voice activated multicast group 120 a
  • virtual client 108 is subscribed to voice activated multicast group 120 b.
  • Endpoints 112 a , 112 d , 112 f , and 112 g joined the video conference as VA clients.
  • Endpoints 112 a and 112 d subscribed to voice activated multicast group (H263) 120 a
  • endpoints 112 f and 112 g subscribed to voice activated multicast group (H264) 120 b .
  • endpoint 112 a has been designated the current speaker (CS).
  • Endpoints 112 b , 112 c , 112 e , and 112 h joined the video conference as CP clients.
  • Endpoints 112 b and 112 c subscribed to continuous presence multicast group (H263) 122 a
  • endpoints 112 e and 112 h subscribed to continuous presence multicast group (H264) 122 b .
  • endpoint 112 h has been designated the last speaker (LS).
  • Video bridges 116 joined the video conference as VA clients.
  • Video bridge 116 a subscribed to voice activated multicast group (H263) 120 a
  • video bridge 116 b subscribed to voice activated multicast group (H264) 120 b.
  • Endpoint 112 a , the current speaker, multicasts its current speaker stream 150 a to the VA clients in portion 102 of system 100 , which include endpoint 112 d , video bridge 116 a , and virtual client 106 .
  • Endpoint 112 a communicates its current speaker stream 150 a to media switch 114 b , which communicates stream 150 a via voice activated multicast group 120 a to media switches 114 a , 114 c , and 114 d .
  • Media switch 114 a communicates current speaker stream 150 a to video bridge 116 a
  • media switch 114 c communicates current speaker stream 150 a to endpoint 112 d
  • media switch 114 d communicates current speaker stream 150 a to virtual client 106 .
  • Virtual clients 106 and 108 transcode or transrate current speaker stream 150 a from the coding protocol supported by portion 102 of system 100 to the coding protocol supported by portion 104 of system 100 .
  • virtual clients 106 and 108 generate current speaker stream 150 b , which corresponds to current speaker stream 150 a .
  • current speaker stream 150 a is H263
  • current speaker stream 150 b is H264.
  • Virtual client 108 is the logical current speaker in portion 104 of system 100 .
  • virtual client 108 multicasts its current speaker stream 150 b to the VA clients in portion 104 of system 100 , which include endpoints 112 f and 112 g , and video bridge 116 b .
  • Virtual client 108 communicates current speaker stream 150 b to media switch 114 d , which communicates stream 150 b via voice activated multicast group 120 b to media switches 114 e , 114 f , and 114 g .
  • Media switch 114 e communicates current speaker stream 150 b to endpoint 112 f
  • media switch 114 f communicates current speaker stream 150 b to endpoint 112 g
  • media switch 114 g communicates current speaker stream 150 b to video bridge 116 b.
  • Video bridges 116 a and 116 b receive current speaker streams 150 a and 150 b , generate continuous presence, current speaker (CP CS ) streams 152 a and 152 b , and multicast continuous presence, current speaker (CP CS ) streams 152 a and 152 b to the CP clients, which include endpoints 112 b , 112 c , 112 e , and 112 h . Because video bridges 116 a and 116 b are VA clients, they receive current speaker streams 150 a and 150 b when these streams are multicast to voice activated multicast groups 120 a and 120 b .
  • Video bridges 116 a and 116 b use current speaker streams 150 a and 150 b to generate continuous presence, current speaker (CP CS ) streams 152 a and 152 b .
  • Continuous presence, current speaker (CP CS ) streams 152 a and 152 b allow participants to view several streams simultaneously, one of which is the current speaker.
  • the other streams may be fixed streams, streams from other endpoints 112 , streams from a video presentation, a slideshow, streams from a computer, or streams from any other suitable visual representation.
  • Video bridge 116 a communicates continuous presence, current speaker (CP CS ) stream 152 a to media switch 114 a , which communicates stream 152 a via continuous presence multicast group 122 a to media switches 114 b and 114 c .
  • Media switch 114 b communicates continuous presence, current speaker (CP CS ) stream 152 a to endpoint 112 b
  • media switch 114 c communicates continuous presence, current speaker (CP CS ) stream 152 a to endpoint 112 c.
  • Video bridge 116 b communicates continuous presence, current speaker (CP CS ) stream 152 b to media switch 114 g , which communicates stream 152 b via continuous presence multicast group 122 b to media switches 114 e and 114 f .
  • Media switch 114 e communicates continuous presence, current speaker (CP CS ) stream 152 b to endpoint 112 e
  • media switch 114 f communicates continuous presence, current speaker (CP CS ) stream 152 b to endpoint 112 h.
  • Endpoint 112 h , which is designated as the last speaker in this example, unicasts its last speaker (LS) stream 154 b to virtual client 108 , which is the logical current speaker in portion 104 of system 100 .
  • Because virtual client 108 is a voice activated client, it would typically receive current speaker stream 150 b from voice activated multicast group 120 b .
  • Because virtual client 108 is the logical current speaker in portion 104 of system 100 , however, virtual client 108 instead receives last speaker stream 154 b of the last speaker.
  • Endpoint 112 h communicates last speaker stream 154 b to media switch 114 f , which communicates it to media switch 114 d .
  • Media switch 114 d communicates last speaker stream 154 b to virtual client 108 .
  • Virtual clients 106 and 108 transcode or transrate last speaker stream 154 b from the codec and/or rate supported by portion 104 of system 100 to the codec and/or rate supported by portion 102 of system 100 .
  • virtual clients 106 and 108 generate last speaker stream 154 a , which corresponds to last speaker stream 154 b .
  • last speaker stream 154 a is H263
  • last speaker stream 154 b is H264.
  • Virtual client 106 , which is the logical last speaker in portion 102 of system 100 , unicasts last speaker stream 154 a to endpoint 112 a , which is the current speaker.
  • Endpoint 112 a is a VA client and thus would typically receive current speaker stream 150 a from voice activated multicast group 120 a . Because endpoint 112 a is the current speaker, current speaker stream 150 a would present the participant at endpoint 112 a with a video of himself or herself. Rather than present the participant at endpoint 112 a with a video of himself or herself, endpoint 112 a receives stream 154 a of the last speaker.
  • Virtual client 106 communicates last speaker stream 154 a to media switch 114 d , which communicates it to media switch 114 b .
  • Media switch 114 b communicates last speaker stream 154 a to endpoint 112 a.
  • Stream controller 118 communicates instructions to media switches 114 to control the processing and/or communication of streams 150 , 152 , and 154 as described above.
  • stream controller 118 may communicate instructions to endpoints 112 and video bridges 116 regarding the processing and/or communication of streams 150 , 152 , and 154 .
  • Stream controller 118 also may communicate instructions to control the processing or communication of the other streams that video bridge 116 combines with current speaker stream 150 to generate continuous presence, current speaker (CP CS ) stream 152 .
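The virtual-client pattern described above — receive the current speaker stream on one portion's multicast group, transcode it, and re-emit it on the other portion's group as the logical current speaker — can be sketched in Python. This is a toy illustration only: the "frames" are plain dicts, the transcode step merely relabels the codec, and all names are assumptions rather than any actual Cisco API.

```python
def transcode(frame, to_codec):
    """Stand-in for a real H263->H264 transcoder (or a transrater):
    here it only relabels the frame's codec field."""
    out = dict(frame)
    out["codec"] = to_codec
    return out

def virtual_client_relay(frame, to_codec, out_group):
    """Receive a current-speaker frame from one portion of the system,
    transcode it, and re-emit it on the other portion's VA group,
    acting as that portion's logical current speaker."""
    return (out_group, transcode(frame, to_codec))

group, frame = virtual_client_relay(
    {"codec": "H263", "speaker": "EP 112a"}, "H264", "VA group 120b")
```

The same relay runs in the reverse direction for the last speaker stream, which is why the pair of virtual clients appears on both multicast groups.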
  • FIG. 7 is an example of video conferencing system 100 implementing transcoding and/or transrating, in which the current speaker is a CP client.
  • System 100 in FIG. 7 includes endpoints 112 a , 112 b , 112 c , 112 d , 112 e , 112 f , 112 g , and 112 h (generally, endpoints 112 ); media switches 114 a , 114 b , 114 c , 114 d , 114 e , 114 f , and 114 g (generally, media switches 114 ); video bridges 116 a and 116 b (generally, video bridges 116 ); stream controller 118 ; voice activated multicast groups 120 a and 120 b (generally, voice activated multicast group 120 ); and continuous presence multicast groups 122 a and 122 b (generally, continuous presence multicast group 122 ).
  • Voice activated multicast groups 120 a and 120 b multicast current speaker streams 150 a and 150 b
  • continuous presence multicast groups 122 a and 122 b multicast continuous presence, current speaker (CP CS ) streams 152 a and 152 b.
  • a first portion 102 of system 100 operates using one codec (in this example, H263), and a second portion 104 of system 100 operates using a different codec (in this example, H264).
  • the transcoding and/or transrating is implemented by a pair of virtual clients 106 and 108 .
  • Virtual client 106 is subscribed to voice activated multicast group 120 a
  • virtual client 108 is subscribed to voice activated multicast group 120 b.
  • Endpoints 112 a , 112 d , 112 f , and 112 g joined the video conference as VA clients.
  • Endpoints 112 a and 112 d subscribed to voice activated multicast group (H263) 120 a
  • endpoints 112 f and 112 g subscribed to voice activated multicast group (H264) 120 b.
  • Endpoints 112 b , 112 c , 112 e , and 112 h joined the video conference as CP clients.
  • Endpoints 112 b and 112 c subscribed to continuous presence multicast group (H263) 122 a
  • endpoints 112 e and 112 h subscribed to continuous presence multicast group (H264) 122 b .
  • endpoint 112 b has been designated the current speaker (CS)
  • endpoint 112 h has been designated the last speaker (LS).
  • Video bridges 116 joined the video conference as VA clients.
  • Video bridge 116 a subscribed to voice activated multicast group 120 a
  • video bridge 116 b subscribed to voice activated multicast group 120 b.
  • Endpoint 112 b , the current speaker, multicasts its current speaker stream 150 a to the VA clients in portion 102 of system 100 , which include endpoints 112 a and 112 d , video bridge 116 a , and virtual client 106 .
  • endpoint 112 b communicates its current speaker stream 150 a to media switch 114 b , which communicates stream 150 a via voice activated multicast group 120 a to media switches 114 a , 114 c , and 114 d .
  • Media switch 114 b also communicates stream 150 a to endpoint 112 a .
  • Media switch 114 a communicates current speaker stream 150 a to video bridge 116 a
  • media switch 114 c communicates current speaker stream 150 a to endpoint 112 d
  • media switch 114 d communicates current speaker stream 150 a to virtual client 106 .
  • Virtual clients 106 and 108 transcode or transrate current speaker stream 150 a from the codec and/or rate supported by portion 102 of system 100 to the codec and/or rate supported by portion 104 of system 100 .
  • virtual clients 106 and 108 generate current speaker stream 150 b , which corresponds to current speaker stream 150 a .
  • current speaker stream 150 a is H263
  • current speaker stream 150 b is H264.
  • Virtual client 108 is the logical current speaker in portion 104 of system 100 .
  • virtual client 108 multicasts its current speaker stream 150 b to the VA clients in portion 104 of system 100 , which include endpoints 112 f and 112 g , and video bridge 116 b .
  • virtual client 108 communicates current speaker stream 150 b to media switch 114 d , which communicates stream 150 b via voice activated multicast group 120 b to media switches 114 e , 114 f , and 114 g .
  • Media switch 114 e communicates current speaker stream 150 b to endpoint 112 f
  • media switch 114 f communicates current speaker stream 150 b to endpoint 112 g
  • media switch 114 g communicates current speaker stream 150 b to video bridge 116 b.
  • Video bridges 116 a and 116 b receive current speaker streams 150 a and 150 b , generate continuous presence, current speaker (CP CS ) streams 152 a and 152 b , and multicast continuous presence, current speaker (CP CS ) streams 152 a and 152 b to the CP clients, which include endpoints 112 c , 112 e , and 112 h . Because video bridges 116 a and 116 b are VA clients, they receive current speaker streams 150 a and 150 b when streams 150 a and 150 b are multicast to voice activated multicast groups 120 a and 120 b .
  • Video bridges 116 a and 116 b use current speaker streams 150 a and 150 b to generate continuous presence, current speaker (CP CS ) streams 152 a and 152 b .
  • Continuous presence, current speaker (CP CS ) streams 152 a and 152 b allow participants to view several streams simultaneously, one of which is the current speaker.
  • the other streams may be fixed streams, streams from other endpoints 112 , streams from a video presentation, a slideshow, streams from a computer, or streams from any other suitable visual representation.
  • Video bridge 116 a communicates continuous presence, current speaker (CP CS ) stream 152 a to media switch 114 a , which communicates stream 152 a via continuous presence multicast group 122 a to media switch 114 c .
  • Media switch 114 c communicates continuous presence, current speaker (CP CS ) stream 152 a to endpoint 112 c.
  • Video bridge 116 b communicates continuous presence, current speaker (CP CS ) stream 152 b to media switch 114 g , which communicates stream 152 b via continuous presence multicast group 122 b to media switches 114 e and 114 f .
  • Media switch 114 e communicates continuous presence, current speaker (CP CS ) stream 152 b to endpoint 112 e
  • media switch 114 f communicates continuous presence, current speaker (CP CS ) stream 152 b to endpoint 112 h.
  • Endpoint 112 h , which is designated as the last speaker in this example, unicasts its last speaker (LS) stream 164 b to virtual client 108 , which is the logical current speaker in portion 104 of system 100 .
  • Virtual client 108 is a VA client, so stream controller 118 instructs media switch 114 f to communicate last speaker stream 164 b to virtual client 108 via media switch 114 d rather than communicate it to video bridge 116 b.
  • Virtual clients 106 and 108 transcode or transrate last speaker stream 164 b from the coding protocol supported by portion 104 of system 100 to the coding protocol supported by portion 102 of system 100 .
  • virtual clients 106 and 108 generate last speaker stream 164 a , which corresponds to last speaker stream 164 b .
  • last speaker stream 164 a is H263
  • last speaker stream 164 b is H264.
  • Virtual client 106 , the logical last speaker in portion 102 of system 100 , unicasts last speaker stream 164 a to video bridge 116 a .
  • the current speaker is associated with endpoint 112 b , a CP client.
  • Endpoint 112 b , as a CP client, would typically receive continuous presence, current speaker (CP CS ) stream 152 a from continuous presence multicast group 122 a .
  • Because endpoint 112 b is the current speaker, continuous presence, current speaker (CP CS ) stream 152 a would present the participant at endpoint 112 b with a video of himself or herself. Rather than present the participant at endpoint 112 b with a video of himself or herself, endpoint 112 b receives continuous presence, last speaker (CP LS ) stream 166 , which is the continuous presence stream with the last speaker video instead of the current speaker video.
  • Virtual client 106 communicates last speaker stream 164 a to media switch 114 d , which communicates it to media switch 114 a .
  • Media switch 114 a communicates last speaker stream 164 a to video bridge 116 a . Because the current speaker is a CP client rather than a VA client, last speaker stream 164 a is not unicast to the current speaker, endpoint 112 b.
  • Video bridge 116 a uses last speaker stream 164 a to generate continuous presence, last speaker (CP LS ) stream 166 .
  • Continuous presence, last speaker (CP LS ) stream 166 is like continuous presence, current speaker (CP CS ) stream 152 a but includes last speaker stream 164 a in place of current speaker stream 150 a .
  • Continuous presence, last speaker (CP LS ) stream 166 allows a continuous presence client, who is the current speaker, to view several streams simultaneously, one of which is the last speaker.
  • the other streams in continuous presence, last speaker (CP LS ) stream 166 may be fixed streams, streams from other endpoints 112 , streams from a video presentation, a slideshow, streams from a computer, or streams from any other suitable visual representation.
  • video bridge 116 a communicates continuous presence, last speaker (CP LS ) stream 166 to media switch 114 a , which communicates stream 166 to media switch 114 b .
  • Media switch 114 b communicates continuous presence, last speaker (CP LS ) stream 166 to endpoint 112 b.
  • Stream controller 118 communicates instructions to media switches 114 to control the processing and/or communication of streams 150 , 152 , 164 , and 166 as described above.
  • stream controller 118 may communicate instructions to endpoints 112 and video bridges 116 regarding the processing and/or communication of streams 150 , 152 , 164 , and 166 .
  • Stream controller 118 also may communicate instructions to control the processing or communication of the other streams that video bridge 116 combines with current speaker stream 150 and last speaker stream 164 to generate continuous presence, current speaker (CP CS ) stream 152 and continuous presence, last speaker (CP LS ) stream 166 .
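The bridge's continuous presence composition step — placing the featured stream (the current speaker for a CP CS stream, or the last speaker for a CP LS stream) alongside the other tiles — can be sketched as follows. This is a toy model under stated assumptions: a "frame" is just an ordered list of stream labels, whereas a real video bridge composites pixels.

```python
def compose_cp_frame(tiles, featured):
    """Toy continuous-presence composition: the featured stream goes
    first, followed by the remaining tiles (fixed streams, other
    endpoints, slideshows, etc.). Duplicates of the featured stream
    are dropped so the speaker does not appear twice."""
    return [featured] + [t for t in tiles if t != featured]

# CP LS frame for the current speaker: last speaker tile replaces
# the current speaker tile.
cp_ls = compose_cp_frame(["slides", "EP 112c", "EP 112e"], "LS 164a")
```

Swapping the featured argument between the current speaker stream and the last speaker stream is all that distinguishes the CP CS and CP LS outputs in this model.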
  • FIG. 8 illustrates an example method for video conferencing.
  • the method begins at step 200 , where stream controller 18 receives requests from VA clients, including endpoints 12 a and 12 d and video bridge 16 , to subscribe to voice activated multicast group 20 .
  • stream controller 18 receives request from CP clients, including endpoints 12 b and 12 c , to subscribe to continuous presence multicast group 22 .
  • stream controller 18 determines whether the current speaker is a VA client. If the current speaker is a VA client, the method continues at step 230 , where stream controller 18 instructs the current speaker (endpoint 12 or media switch 14 associated with current speaker) to multicast the current speaker's stream 50 to VA clients (including video bridge 16 ), except the current speaker.
  • stream controller 18 instructs the last speaker (endpoint 12 or media switch 14 associated with last speaker) to communicate the last speaker's stream 54 to the current speaker (endpoint 12 or media switch 14 associated with current speaker).
  • Stream controller 18 instructs video bridge 16 to generate continuous presence, current speaker stream 52 at step 250 and to multicast stream 52 to CP clients, including endpoints 12 b and 12 c . The method returns to step 220 , to determine if the current speaker is still a VA client.
  • stream controller 18 instructs the current speaker (endpoint 12 or media switch 14 associated with current speaker) to multicast the current speaker's stream 60 to VA clients (including video bridge 16 ).
  • Stream controller 18 instructs video bridge 16 to generate continuous presence, current speaker stream 62 at step 280 and to multicast stream 62 to CP clients, except the current speaker, at step 290 .
  • stream controller 18 instructs the last speaker (endpoint 12 or media switch 14 associated with last speaker) to communicate the last speaker's stream 64 to video bridge 16 .
  • Stream controller 18 instructs video bridge 16 to generate continuous presence, last speaker stream 66 at step 310 and to communicate stream 66 to current speaker at step 320 .
  • the method returns to step 220 , to determine if the current speaker is a VA client.
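One pass of the FIG. 8 loop reduces to a single branch on whether the current speaker is a VA client. The sketch below paraphrases steps 230-320 as instruction strings; the function name and string contents are illustrative assumptions, not an actual controller API.

```python
def controller_pass(current_speaker_is_va):
    """Return the instructions stream controller 18 would issue on one
    pass of the FIG. 8 method, paraphrased as strings for illustration."""
    if current_speaker_is_va:
        return [
            "step 230: multicast CS stream 50 to VA clients except the current speaker",
            "step 240: unicast LS stream 54 to the current speaker",
            "steps 250-260: bridge generates CP CS stream 52 and multicasts it to CP clients",
        ]
    # Current speaker is a CP client.
    return [
        "step 270: multicast CS stream 60 to VA clients (including video bridge 16)",
        "steps 280-290: bridge generates CP CS stream 62 and multicasts it to CP clients except the current speaker",
        "step 300: unicast LS stream 64 to video bridge 16",
        "steps 310-320: bridge generates CP LS stream 66 and sends it to the current speaker",
    ]
```

The extra bridge round trip in the CP-speaker branch reflects that a CP current speaker receives a composed CP LS stream rather than the raw last speaker stream.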
  • FIG. 9 illustrates an example method of generating VA and CP stream map tables 70 and 80 .
  • stream controller 18 may use this method or another method to generate stream map tables, such as the example tables illustrated in FIGS. 4 and 5 .
  • step 400 stream controller 18 initializes VA stream map table 70 and CP stream map table 80 .
  • stream controller 18 creates VA stream map table 70 with 1st speaker entry 76 for identifying the current speaker stream, 2nd speaker entry 78 for identifying the last speaker stream, and 2nd speaker target entry 79 for identifying the intended recipient of the last speaker stream.
  • stream controller 18 creates CP stream map 80 with 1st speaker entry 86 identifying continuous presence, current speaker stream, 2nd speaker entry 87 identifying continuous presence, last speaker stream, and 2nd speaker target entry 88 for identifying the intended recipient of the continuous presence, last speaker stream.
  • 1st speaker entry 86 includes information, such as stream ID 82 and media switch ID 84 , identifying continuous presence, current speaker stream, and 2nd speaker entry 87 includes information, such as stream ID 82 and media switch ID 84 , identifying continuous presence, last speaker stream.
  • stream controller 18 receives requests from VA clients, such as endpoints 12 a and 12 d and video bridge 16 , to subscribe to voice activated multicast group 20 .
  • stream controller 18 receives request from CP clients, such as endpoints 12 b and 12 c , to subscribe to continuous presence multicast group 22 .
  • stream controller 18 receives information identifying a new current speaker.
  • an external device such as an audio mixer, provides this information.
  • stream controller 18 moves the information from the 1st speaker entry 76 to 2nd speaker entry 78 of VA stream map table 70 at step 440 . This change is made to indicate that the old current speaker is the new last speaker.
  • stream controller enters current speaker information into the 1st speaker entry 76 of VA stream map table 70 .
  • the current speaker information includes the stream ID 72 and media switch ID 74 associated with the current speaker.
  • stream controller 18 determines whether the new current speaker is a VA client. If the new current speaker is a VA client, the method continues at step 470 , where stream controller 18 makes the current speaker the target of the 2nd speaker stream of VA stream map table 70 . In a particular embodiment, stream controller 18 identifies the current speaker in 2nd speaker target 79 of VA stream map table 70 by entering the current speaker's input stream ID 72 and media switch ID 74 . At step 480 , stream controller 18 provides no 2nd speaker stream target 88 in CP stream map table 80 .
  • If the new current speaker is not a VA client at step 460 , then the new current speaker is a CP client, and the method continues at step 490 , where stream controller 18 makes video bridge 16 the 2nd speaker target 79 in VA stream map 70 .
  • At step 500 , stream controller 18 makes the current speaker the 2nd speaker target 88 of CP stream map 80 .
  • stream controller 18 communicates VA stream map table 70 and CP stream map table 80 to media switches 14 .
  • stream controller 18 communicates tables 70 and 80 to other network devices supporting the video conference.
  • stream controller 18 communicates only information that has changed in tables 70 and 80 . The method returns to step 430 to wait for information identifying a new current speaker.
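The table update of steps 440-500 can be sketched as a single function over the two stream map tables. The dict-based table layout and key names below are illustrative assumptions standing in for the entries (76, 78, 79, and their CP counterparts) described above.

```python
def on_new_speaker(va_table, cp_table, new_speaker):
    """Update the stream map tables when a new current speaker is
    reported (FIG. 9, steps 440-500). `new_speaker` carries the
    speaker's stream ID, media switch ID, and a VA/CP flag."""
    # Step 440: the old current speaker becomes the new last speaker.
    va_table["2nd_speaker"] = va_table["1st_speaker"]
    # Step 450: record the new current speaker.
    va_table["1st_speaker"] = {"stream_id": new_speaker["stream_id"],
                               "media_switch": new_speaker["media_switch"]}
    if new_speaker["is_va"]:
        # Steps 470-480: a VA speaker receives the last speaker stream
        # directly, so no CP-side last-speaker target is needed.
        va_table["2nd_speaker_target"] = new_speaker["stream_id"]
        cp_table["2nd_speaker_target"] = None
    else:
        # Steps 490-500: a CP speaker receives the composed CP LS stream
        # via the video bridge instead.
        va_table["2nd_speaker_target"] = "video_bridge"
        cp_table["2nd_speaker_target"] = new_speaker["stream_id"]
    return va_table, cp_table
```

After the update, only the changed entries need to be pushed to the media switches, matching the "communicate only information that has changed" optimization above.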
  • FIG. 10 illustrates an example method for communicating video streams at media switch 14 in support of a video conference.
  • the method begins at step 600 , where media switch 14 receives instructions from stream controller 18 .
  • the instructions comprise VA stream map table 70 and CP stream map table 80 .
  • If media switch 14 is associated with the last speaker identified in the instructions at step 610 , media switch 14 unicasts the last speaker stream to the last speaker target identified in the instructions at step 620 . In a particular embodiment, media switch 14 determines whether it is associated with the last speaker by determining whether it is identified in the media switch ID 74 of 2nd speaker entry 78 of VA stream map table 70 . In a particular embodiment, media switch 14 unicasts the last speaker stream to the 2nd speaker target 79 identified in VA stream map table 70 .
  • media switch 14 multicasts the current speaker stream to the VA multicast group 20 at step 640 .
  • media switch 14 determines whether it is associated with the current speaker by determining whether it is identified in the media switch ID 74 of 1st speaker entry 76 of VA map stream table 70 . If the current speaker is a VA client at step 650 , media switch 14 communicates the last speaker stream to the current speaker endpoint 12 at step 660 . In a particular embodiment, media switch 14 receives the last speaker stream from another media switch 14 or other network device and communicates the stream to current speaker endpoint 12 .
  • If the current speaker is not a VA client at step 650 , then the current speaker is a CP client, and media switch 14 communicates the continuous presence, last speaker stream to the current speaker endpoint 12 at step 670 .
  • media switch 14 receives the continuous presence, last speaker stream from video bridge 16 or other network device and communicates the stream to current speaker endpoint 12 .
  • If media switch 14 is supporting VA clients (other than the current speaker) at step 680 , media switch 14 communicates the current speaker stream to the other VA clients at step 690 .
  • media switch 14 receives the current speaker stream from another media switch 14 or other network device that multicast the current speaker stream to voice activated multicast group 20 , and media switch 14 communicates the current speaker stream to one or more endpoints 12 that subscribed to voice activated multicast group 20 .
  • If media switch 14 is supporting CP clients (other than the current speaker) at step 700 , media switch 14 communicates the continuous presence, current speaker stream to the other CP clients at step 710 .
  • media switch 14 receives the continuous presence, current speaker stream from video bridge 16 or other network device that multicast the continuous presence, current speaker stream to continuous presence multicast group 22 , and media switch 14 communicates the continuous presence, current speaker stream to one or more endpoints 12 that subscribed to continuous presence multicast group 22 .
  • The method returns to step 600 when media switch 14 receives new instructions from stream controller 18 .
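The per-switch forwarding decisions of FIG. 10 (steps 610-710) can be sketched as a pure function from a switch's ID and the VA stream map table to a list of actions. The table keys, action labels, and `serves_va`/`serves_cp` flags are illustrative assumptions introduced for this sketch.

```python
def media_switch_actions(switch_id, va_table, serves_va, serves_cp):
    """Decide what one media switch forwards, per FIG. 10."""
    actions = []
    # Steps 610-620: the switch hosting the last speaker unicasts the
    # LS stream to the 2nd speaker target.
    if va_table["2nd_speaker"]["media_switch"] == switch_id:
        actions.append(("unicast_ls", va_table["2nd_speaker_target"]))
    # Steps 630-670: the switch hosting the current speaker multicasts
    # the CS stream, then returns either the LS stream (VA speaker) or
    # the composed CP LS stream (CP speaker) to that endpoint.
    if va_table["1st_speaker"]["media_switch"] == switch_id:
        actions.append(("multicast_cs", "va_group_20"))
        actions.append(("deliver_to_speaker",
                        "ls_stream" if va_table["speaker_is_va"] else "cp_ls_stream"))
    # Steps 680-710: fan out to locally attached subscribers.
    if serves_va:
        actions.append(("forward_cs", "local_va_clients"))
    if serves_cp:
        actions.append(("forward_cp_cs", "local_cp_clients"))
    return actions
```

Because each switch derives its role purely from the distributed tables, no switch needs global knowledge of the conference topology.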

Abstract

In one embodiment, an apparatus includes a first module that causes the first endpoint to receive a current speaker's video stream if the first endpoint is not the current speaker and to receive a last speaker's video stream if the first endpoint is the current speaker. The apparatus includes a second module that causes the second endpoint to receive a continuous presence, current speaker video stream if the second endpoint is not the current speaker and to receive a continuous presence, last speaker video stream if the second endpoint is the current speaker. The continuous presence, current speaker video stream comprises two or more video streams, one of which includes at least a portion of the current speaker's video stream. The continuous presence, last speaker video stream comprises two or more video streams, one of which includes at least a portion of a last speaker's video stream.

Description

    TECHNICAL FIELD
  • The present disclosure relates generally to the field of communications.
  • BACKGROUND
  • A centralized multipoint control unit (MCU) is traditionally used to support video conferencing. A conference server receives media streams from the endpoints, mixes the streams, and sends individual streams back to the endpoints. The mixing may include composition, for example, creating a two-by-two composition of four video streams. Each of these sub-streams can be locked to a particular user or voice-switched where appropriate. Other possible compositions can be one-by-one, one-by-two, three-by-three, etc. It is critical that timing and synchronization be precise in such video-conferencing scenarios. Additionally, bandwidth considerations should be recognized and appreciated in attempting to accommodate optimal video conferences. Accordingly, the ability to provide an effective mechanism to properly direct communications for an end user/endpoint, or to offer an appropriate protocol that optimizes bandwidth characteristics and parameters, provides a significant challenge to network operators, component manufacturers, and system designers.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates an example of a distributed video conferencing system that supports both voice activated (VA) and continuous presence (CP) streams;
  • FIG. 2 illustrates the example video conferencing system of FIG. 1 in which the current speaker is a voice activated endpoint;
  • FIG. 3 illustrates the example video conferencing system of FIG. 1 in which the current speaker is a continuous presence endpoint;
  • FIG. 4 illustrates an example VA stream map table that a stream controller may use to control the voice activated multicast group in the example video conferencing system of FIG. 3;
  • FIG. 5 illustrates an example CP stream map table that a stream controller may use to control the continuous presence multicast group in the example video conferencing system of FIG. 3;
  • FIG. 6 is an example of a video conferencing system implementing transcoding and/or transrating, in which the current speaker is a voice activated endpoint;
  • FIG. 7 is an example of a video conferencing system implementing transcoding and/or transrating, in which the current speaker is a continuous presence endpoint;
  • FIG. 8 illustrates an example method for video conferencing;
  • FIG. 9 illustrates an example method of generating stream map tables;
  • FIG. 10 illustrates an example method for communicating video streams at a media switch in support of a video conference.
  • DESCRIPTION OF EXAMPLE EMBODIMENTS
  • Overview
  • In one embodiment, an apparatus includes two modules. A first module receives a request from a first endpoint to subscribe to a voice activated multicast group and causes the first endpoint to receive a current speaker's video stream if the first endpoint is not the current speaker and to receive a last speaker's video stream if the first endpoint is the current speaker. A second module receives a request from a second endpoint to subscribe to a continuous presence multicast group and causes the second endpoint to receive a continuous presence, current speaker video stream if the second endpoint is not the current speaker and to receive a continuous presence, last speaker video stream if the second endpoint is the current speaker. The continuous presence, current speaker video stream includes a composition of two or more video streams, one of which includes at least a portion of the current speaker's video stream. The continuous presence, last speaker video stream includes a composition of two or more video streams, one of which includes at least a portion of a last speaker's video stream.
  • In another embodiment, a system includes one or more VA endpoints, one or more CP endpoints, and a stream controller. The VA endpoints are subscribed to a voice activated multicast group, and the CP endpoints are subscribed to a continuous presence multicast group. The stream controller instructs a media switch to multicast a current speaker's stream to each VA endpoint that is not the current speaker. The stream controller also instructs the media switch to multicast a continuous presence, current speaker stream to each CP endpoint that is not the current speaker. The continuous presence, current speaker video stream includes a composition of two or more video streams, one of which includes at least a portion of the current speaker's video stream.
  • Description
  • FIG. 1 illustrates an example of a distributed video conferencing system 10 that supports both voice activated (VA) and continuous presence (CP) multicast streams. System 10 includes endpoints 12 a, 12 b, 12 c, and 12 d (generally, endpoints 12); media switches 14 a, 14 b, and 14 c (generally, media switches 14); video bridge 16; and stream controller 18. Distributed video conferencing system 10 supports both a voice activated multicast group 20 and a continuous presence multicast group 22.
  • Endpoints 12 represent clients that participate in a video conferencing session in communication system 10. Endpoints 12 may include devices that end users or other devices may use to initiate or participate in a communication, such as a computer, a personal digital assistant (PDA), a laptop, an electronic notebook, a telephone, a mobile station, an audio IP phone, a video phone appliance, a personal computer (PC) based video phone, a streaming client, or any other device, component, element, or object capable of engaging in voice, video, or data exchanges within communication system 10. Endpoints 12 may include a suitable interface to a human user, such as a microphone, a display, a keyboard, a whiteboard, a video-conferencing interface, or other terminal equipment. Endpoints 12 may also be any device that seeks to initiate or participate in a communication on behalf of another entity or element, such as a program, a database, an application, a piece of software, or any other component, device, element, or object capable of initiating a voice, a video, or a data exchange within communication system 10. Data, as used herein in this document, refers to any type of numeric, voice and audio, video, audio-visual, or script data, or any type of source or object code, or any other suitable information in any appropriate format that may be communicated from one point to another.
  • Media switches 14 assist in supporting the video conference. Each media switch 14 can perform a number of functions. Each media switch 14 may register its capabilities at startup, which may include any of the following media processing functions: 1) audio mixing, which mixes the audio of the loudest speakers and distributes loudest speaker information to other media switches 14; 2) audio transcoding, which provides audio transcoding (codec translation) services that can be used by other network devices without the necessary resources (e.g., DSPs) to perform audio transcoding on their own; 3) video composition, which processes video by creating a composite view (e.g., a “Hollywood Squares” layout) of a set of participants; 4) video transrating, which provides a video transrating (bandwidth reduction by changing video quantization parameters) service that can be used by other network devices without the necessary resources (e.g., DSPs) to perform video transrating on their own; 5) video transcoding, which provides video transcoding (codec translation) services that can be used by other network devices without the necessary resources (e.g., DSPs) to perform video transcoding on their own; and 6) media switching, which represents the interface between the edge of the network (toward endpoints) and the core of the network (toward other media switches). Each media switch 14 may include any suitable combination of hardware, software, algorithms, processors, devices, components, objects, application specific integrated circuits (ASICs), or elements operable to facilitate the video-conferencing capabilities and operations described in this document. As used herein, a video or video stream may or may not also involve audio information.
  • Video bridge 16 may perform any of the media processing functions described above with reference to media switches 14. In particular, video bridge 16 may receive two or more video streams and generate a video stream that presents a composition view of the received video streams. The resulting composition video stream allows a user to view simultaneously at least a portion of the video streams that make up the composition. As described below, video bridge 16 generates the continuous presence streams, each of which is a composite of two or more streams.
  • Video bridge 16 may be provided as a service of one or more of media switches 14. As shown in the example illustrated in FIG. 1, video bridge 16 may be an element external to and in communication with media switches 14. Alternatively, video bridge 16 may be internal to media switches 14 or even replace one or more media switches 14. Also, in another alternative embodiment, endpoint 12 may contain video bridging functionality. Video bridge 16 may be combined with other networking equipment. For example, video bridge 16 may be provided in a router, a gateway, a switch, a loadbalancer, or in any other suitable location operable to facilitate these operations.
  • Video bridge 16 may be equipped with an audio mixer and/or video mixer. In a particular embodiment of the present invention, video bridge 16 may include suitable software to provide the capabilities of distributed video conferencing or to execute the operations of communication system 10 as described herein. In other embodiments, these functionalities may be provided within a given network element (as described above) or performed by suitable hardware, algorithms, processors, devices, ASICs, components, objects, or elements. Note that any combination of these elements may also be used in given applications of video conferencing within communication system 10.
  • Stream controller 18 provides instructions to endpoints 12, media switches 14, and video bridge 16 to control communication of the video streams (including multicasting and unicasting). Stream controller 18 may be any suitable combination of hardware, software, algorithms, processors, devices, components, objects, application specific integrated circuits (ASICs), or elements operable to facilitate any of the video-conferencing control functions. Stream controller 18 may be a separate external module (as illustrated in FIG. 1), or it may be functionality built into or associated with one or more other modules, such as endpoints 12, media switches 14, video bridge 16, routers, gateways, switches, loadbalancers, or any other suitable communication or processing equipment.
  • Distributed video conferencing system 10 supports both a voice activated multicast group 20 and a continuous presence multicast group 22. Each endpoint 12 may subscribe to a voice activated multicast group 20 or continuous presence multicast group 22 and thus receive the video stream associated with that particular multicast group. Likewise, endpoints 12 may multicast their video streams. In a particular embodiment, voice activated multicast group 20 and continuous presence multicast group 22 may be source specific multicast (SSM) groups. For participants that do not have endpoints 12 that support multicasting, one of media switches 14 or video bridge 16 may act as an intermediary between the participants' endpoint 12 and the rest of system 10.
  • Voice activated multicast group 20 is associated with the voice activated stream, which carries the video of the current speaker at any given time. However, if the current speaker is subscribed to the voice activated multicast group 20, the current speaker typically will receive the last speaker video stream as opposed to the current speaker video stream.
  • Continuous presence multicast group 22 is associated with a continuous presence stream, which is a stream composed from several streams, one of which is typically the voice activated stream (i.e., the video of the current active speaker at any given time). Again, as with the voice activated multicast group 20, if the current speaker is subscribed to the continuous presence multicast group 22, the current speaker typically will see the last speaker (as opposed to the current speaker) as part of his or her continuous presence stream.
  • Apart from these different stream types, system 10 may accommodate endpoints 12 that support different video characteristics in terms of codec types, frame rates, bit rates, etc. Thus, system 10 may be able to transcode and transrate video streams so that the same stream (voice activated stream or continuous presence stream) may be sent to several endpoints 12 that support different video codecs. In a particular embodiment, several voice activated multicast groups 20 may be assigned to accommodate other various video characteristics, and several continuous presence multicast groups 22 may be assigned to accommodate other various video characteristics.
  • Video conference participants who are interested in receiving the voice activated streams or continuous presence streams subscribe to the appropriate one of voice activated multicast group 20 or continuous presence multicast group 22. Endpoints 12 that subscribe to voice activated multicast group 20 are called voice activated clients (VA clients). As indicated in FIG. 1, endpoints 12 a and 12 d are VA clients. Endpoints 12 that subscribe to continuous presence multicast group 22 are called continuous presence clients (CP clients). As indicated in FIG. 1, endpoints 12 b and 12 c are CP clients.
  • As described above, the voice activated streams or continuous presence streams may include a current speaker (CS) stream and/or a last speaker (LS) stream. In a particular embodiment, the current speaker stream candidates are one or more streams from endpoints 12 with the loudest audio. In such an embodiment, stream controller 18 may select the stream that has the loudest audio, which thus qualifies as the current speaker stream. A current speaker stream becomes the last speaker stream when one of the other streams from endpoints 12 is selected to be the current speaker stream. For example, in the particular embodiment in which the current speaker stream is the stream with the loudest audio, the current speaker stream becomes the last speaker stream when at least one of the other streams from endpoints 12 has a higher audio volume. To support VA clients, the current speaker stream is typically multicast so that any endpoint 12 requiring the stream can receive it, and the last speaker stream is typically unicast to endpoint 12 associated with the current speaker.
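  • As a hedged illustration of the loudest-audio selection described above, the sketch below tracks which endpoint supplies the current speaker stream and demotes the previous current speaker to last speaker when the loudest endpoint changes. The class name and the audio-level representation are hypothetical, not from the disclosure.

```python
class SpeakerTracker:
    """Illustrative current/last speaker selection based on loudest audio."""

    def __init__(self):
        self.current = None  # endpoint supplying the current speaker stream
        self.last = None     # endpoint supplying the last speaker stream

    def update(self, audio_levels):
        """audio_levels: mapping of endpoint id -> measured loudness."""
        loudest = max(audio_levels, key=audio_levels.get)
        if loudest != self.current:
            # The old current speaker stream becomes the last speaker stream.
            self.last = self.current
            self.current = loudest
        return self.current, self.last
```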
  • In operation, distributed video conference system 10 supports a mixed mode of both voice activated streams and continuous presence streams. For VA clients, endpoint 12 associated with the current speaker receives the last speaker video stream from endpoint 12 associated with the last speaker, and other endpoints 12 receive the current speaker video stream from endpoint 12 associated with the current speaker. Typically, stream controller 18 sends a signal to the media switch 14 that hosts endpoint 12 associated with the current speaker to instruct the media switch 14 to multicast its endpoint video stream to one or more endpoints 12 subscribed to voice activated multicast groups 20. In addition, stream controller 18 may send a signal to the media switch 14 that hosts endpoint 12 associated with the last speaker to instruct the media switch 14 to unicast its video stream to endpoint 12 associated with the current speaker, via the hosting media switch 14. Last speaker endpoint 12, via its media switch 14, may unicast its video stream directly to current active speaker endpoint 12 via its media switch 14. In a particular embodiment, stream controller 18 may send the signal directly to endpoint 12 if it has the required capabilities, such as multicasting, to participate in the distributed video conference. Alternatively, last speaker endpoint 12 may communicate its video stream through a transcoder or transrater to current speaker endpoint 12. In a particular embodiment, system 10 may perform bandwidth sharing between the active speaker multicast stream and the last speaker unicast stream.
  • For CP clients, video bridge 16 generates two CP streams. One of the CP streams has the current speaker video stream as one of its composed videos (CPCS). The other CP video stream has the last speaker video stream as one of its composed videos (CPLS). Endpoint 12 associated with the current speaker receives the CPLS video stream that has the last speaker video stream as one of its composed videos, and other endpoints 12 receive the CPCS video stream that has the current speaker video stream as one of its composed videos. To generate these two CP video streams, video bridge 16 subscribes to voice activated multicast group 20, so that video bridge 16 receives the current speaker video stream. Moreover, if the current speaker is associated with a CP endpoint 12 b or 12 c, then stream controller 18 may send a signal to endpoint 12 (in one particular embodiment, via media switch 14) associated with the last speaker to instruct endpoint 12 to unicast its video stream to video bridge 16 so that video bridge 16 can use the last speaker stream to generate the CPLS for communication to endpoint 12 b or 12 c associated with the current speaker. The last speaker endpoint 12 may unicast its video stream directly to video bridge 16. Alternatively, last speaker endpoint 12 may communicate its video stream through a transcoder or transrater to video bridge 16.
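  • The two continuous presence compositions described above differ in only one tile: CPCS embeds the current speaker's video, while CPLS substitutes the last speaker's video. A minimal sketch follows, with frames represented as opaque values (an assumption for illustration; a real video bridge composites decoded video):

```python
def compose_cp_streams(cs_frame, ls_frame, other_frames):
    """Return (cpcs, cpls) composite 'frames' as tuples of tiles.

    cs_frame / ls_frame: current and last speaker tiles.
    other_frames: list of the remaining tiles in the composition.
    """
    # CPCS: everyone who is not the current speaker sees the current speaker.
    cpcs = tuple([cs_frame] + list(other_frames))
    # CPLS: the current speaker sees the last speaker in that tile instead.
    cpls = tuple([ls_frame] + list(other_frames))
    return cpcs, cpls
```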
  • In a particular embodiment, system 10 may perform bandwidth sharing between the active speaker multicast stream and the last speaker unicast stream received by the voice activated participant or video bridge 16. System 10 may also share bandwidth between the CPCS multicast stream and the unicast CPLS stream.
  • FIG. 2 illustrates video conferencing system 10 in which the current speaker is endpoint 12 a, which is a VA client. There are two multicast groups: (1) voice activated multicast group 20 for current speaker (CS) stream 50 and (2) continuous presence multicast group 22 for continuous presence, current speaker (CPCS) stream 52. In this example, endpoints 12 a and 12 d joined the video conference as voice activated clients and subscribed to voice activated multicast group 20, and endpoints 12 b and 12 c joined the video conference as continuous presence clients and subscribed to continuous presence multicast group 22. Endpoint 12 a has been designated the current speaker (CS), and endpoint 12 c has been designated the last speaker (LS). In addition to endpoints 12 a and 12 d, video bridge 16 also joined the video conference as a voice activated client and subscribed to voice activated multicast group 20.
  • Endpoint 12 a, the current speaker, multicasts its current speaker stream 50 to the other VA clients, which include endpoint 12 d and video bridge 16. As shown in FIG. 2, endpoint 12 a communicates its current speaker stream 50 to media switch 14 b, which communicates stream 50 via voice activated multicast group 20 to media switches 14 a and 14 c. Media switch 14 a communicates current speaker stream 50 to video bridge 16, and media switch 14 c communicates current speaker stream 50 to endpoint 12 d.
  • Video bridge 16 receives current speaker stream 50, generates continuous presence, current speaker (CPCS) stream 52, and multicasts continuous presence, current speaker (CPCS) stream 52 to the CP clients, which include endpoints 12 b and 12 c. Because video bridge 16 is a voice activated client, video bridge 16 receives current speaker stream 50 when that stream is multicast to the voice activated multicast group 20. Video bridge 16 uses current speaker stream 50 to generate continuous presence, current speaker (CPCS) stream 52. Continuous presence, current speaker (CPCS) stream 52 allows participants to view several streams simultaneously, one of which is the current speaker. The other streams may be fixed streams, streams from other endpoints 12, streams from a video presentation, a slideshow, streams from a computer, or streams from any other suitable visual representation. As shown in FIG. 2, video bridge 16 communicates continuous presence, current speaker (CPCS) stream 52 to media switch 14 a, which communicates stream 52 via continuous presence multicast group 22 to media switches 14 b and 14 c. Media switch 14 b communicates continuous presence, current speaker (CPCS) stream 52 to endpoint 12 b, and media switch 14 c communicates continuous presence, current speaker (CPCS) stream 52 to endpoint 12 c.
  • Endpoint 12 c, which is designated as the last speaker in this example, unicasts its last speaker (LS) stream 54 to endpoint 12 a, the VA client designated as the current speaker in this example. Endpoint 12 a is a VA client and thus would typically receive current speaker stream 50 from voice activated multicast group 20. Because endpoint 12 a is the current speaker, current speaker stream 50 would present the participant at endpoint 12 a with a video of himself or herself. Rather than present the participant at endpoint 12 a with a video of himself or herself, endpoint 12 a receives the stream 54 of the last speaker. Endpoint 12 c communicates last speaker stream 54 to media switch 14 c, which communicates it to media switch 14 b. Media switch 14 b communicates last speaker stream 54 to endpoint 12 a. Because the current speaker is a VA client rather than a CP client, video bridge 16 does not receive the unicast of last speaker stream 54 from the last speaker, endpoint 12 c.
  • Stream controller 18 communicates instructions to media switches 14 to control the processing and/or communication of streams 50, 52, and 54 as described above. In an alternative embodiment, stream controller 18 may communicate instructions to endpoints 12 and video bridge 16 regarding the processing and/or communication of streams 50, 52, and 54. Stream controller 18 also may communicate instructions to control the processing or communication of the other streams that video bridge 16 combines with current speaker stream 50 to generate continuous presence, current speaker (CPCS) stream 52.
  • FIG. 3 illustrates an example video conferencing system in which the current speaker is a CP client. There are two multicast groups: (1) voice activated multicast group 20 for current speaker stream 60 and (2) continuous presence multicast group 22 for continuous presence, current speaker (CPCS) stream 62. In this example, endpoints 12 a and 12 d joined the video conference as VA clients and subscribed to voice activated multicast group 20, and endpoints 12 b and 12 c joined the video conference as CP clients and subscribed to continuous presence multicast group 22. Endpoint 12 a has been designated the last speaker (LS), and endpoint 12 c has been designated the current speaker (CS). In addition to endpoints 12 a and 12 d, video bridge 16 also joined the video conference as a voice activated client and subscribed to voice activated multicast group 20.
  • Endpoint 12 c, the current speaker, multicasts its current speaker stream 60 to the VA clients, which include endpoints 12 a and 12 d and video bridge 16. Endpoint 12 c communicates its current speaker stream 60 to media switch 14 c, which communicates stream 60 via voice activated multicast group 20 to media switches 14 a and 14 b. Media switch 14 c also communicates stream 60 to voice activated client, endpoint 12 d. Media switch 14 a communicates current speaker stream 60 to video bridge 16, and media switch 14 b communicates current speaker stream 60 to endpoint 12 a.
  • Video bridge 16 receives current speaker stream 60, generates continuous presence, current speaker (CPCS) stream 62, and multicasts continuous presence, current speaker (CPCS) stream 62 to the CP clients, in this case, endpoint 12 b. Because video bridge 16 is a VA client, video bridge 16 receives current speaker stream 60 when that stream is multicast to the voice activated multicast group 20. Video bridge 16 uses current speaker stream 60 to generate continuous presence, current speaker (CPCS) stream 62. Continuous presence, current speaker (CPCS) stream 62 allows participants to view several streams simultaneously, one of which is the current speaker. The other streams may be fixed streams, streams from other endpoints 12, streams from a video presentation, a slideshow, streams from a computer, or streams from any other suitable visual representation. Video bridge 16 communicates continuous presence, current speaker (CPCS) stream 62 to media switch 14 a, which communicates stream 62 via continuous presence multicast group 22 to media switch 14 b. Media switch 14 b communicates continuous presence, current speaker (CPCS) stream 62 to endpoint 12 b.
  • Endpoint 12 a, which is designated as the last speaker in this example, unicasts its last speaker (LS) stream 64 to video bridge 16. The current speaker is associated with endpoint 12 c, a CP client. Endpoint 12 c, as a CP client, would typically receive continuous presence, current speaker (CPCS) stream 62 from continuous presence multicast group 22. Because endpoint 12 c is the current speaker, continuous presence, current speaker (CPCS) stream 62 would present the participant at endpoint 12 c with a video of himself or herself. Rather than present the participant at endpoint 12 c with a video of himself or herself, endpoint 12 c receives continuous presence, last speaker (CPLS) stream 66, which is the continuous presence stream with the last speaker video instead of the current speaker video. Endpoint 12 a communicates last speaker stream 64 to media switch 14 b, which communicates it to media switch 14 a. Media switch 14 a communicates last speaker stream 64 to video bridge 16. Because the current speaker, endpoint 12 c, is a CP client rather than a VA client, last speaker stream 64 is not unicast to the current speaker.
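  • The FIG. 2 and FIG. 3 examples differ in where the last speaker's stream is unicast: to the current speaker's endpoint when the current speaker is a VA client, and to video bridge 16 (so the bridge can build the CPLS stream) when the current speaker is a CP client. A sketch of that decision, with hypothetical return labels:

```python
def ls_unicast_target(current_speaker_group: str) -> str:
    """Where the last speaker stream must be unicast, per the FIG. 2/3 examples."""
    if current_speaker_group == "VA":
        # FIG. 2 case: LS goes straight to the current speaker's endpoint.
        return "current_speaker_endpoint"
    if current_speaker_group == "CP":
        # FIG. 3 case: the video bridge needs LS to compose the CPLS stream.
        return "video_bridge"
    raise ValueError(f"unknown group: {current_speaker_group}")
```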
  • Video bridge 16 uses last speaker stream 64 to generate continuous presence, last speaker (CPLS) stream 66. Continuous presence, last speaker (CPLS) stream 66 is like continuous presence, current speaker (CPCS) stream 62 but includes last speaker stream 64 in place of current speaker stream 60. Continuous presence, last speaker (CPLS) stream 66 allows a CP client, who is the current speaker, to view several streams simultaneously, one of which is the last speaker. As with continuous presence, current speaker (CPCS) stream 62, the other streams in continuous presence, last speaker (CPLS) stream 66 may be fixed streams, streams from other endpoints 12, streams from a video presentation, a slideshow, streams from a computer, or streams from any other suitable visual representation. As shown in FIG. 3, video bridge 16 communicates continuous presence, last speaker (CPLS) stream 66 to media switch 14 a, which communicates stream 66 to media switch 14 c. Media switch 14 c communicates continuous presence, last speaker (CPLS) stream 66 to endpoint 12 c.
  • Stream controller 18 communicates instructions to media switches 14 to control the processing and/or communication of streams 60, 62, 64, and 66 as described above. In an alternative embodiment, stream controller 18 may communicate instructions to endpoints 12 and video bridge 16 regarding the processing and/or communication of streams 60, 62, 64, and 66. Stream controller 18 also may communicate instructions to control the processing or communication of the other streams that video bridge 16 combines with current speaker stream 60 and last speaker stream 64 to generate continuous presence, current speaker (CPCS) stream 62 and continuous presence, last speaker (CPLS) stream 66.
  • FIG. 4 illustrates an example VA stream map table 70 that stream controller 18 may use to control voice activated multicast group 20 in the example video conferencing system 10 of FIG. 3. Table 70 identifies the video streams by their stream IDs in first column 72 and the media switch ID in second column 74.
  • First row 76 of table 70 identifies the current speaker stream 60 that is multicast to the VA clients, such as endpoints 12 that subscribed to voice activated multicast group 20. In the example of FIG. 3, the current speaker is endpoint 12 c. First row 76 identifies stream 60 of endpoint 12 c as “EP 3-1 in,” which stands for the input stream of endpoint 3-1. First row 76 also identifies media switch 14 c (MS3) as the media switch that receives stream 60. Table 70 does not specify a target for the current speaker's video stream because, as mentioned above, current speaker stream 60 is multicast to the VA clients.
  • Second row 78 identifies last speaker stream 64. In the example of FIG. 3, the last speaker is endpoint 12 a. Second row 78 identifies stream 64 of endpoint 12 a as “EP 2-1 in,” which stands for the input stream of endpoint 2-1. Second row 78 also identifies media switch 14 b (MS2) as the media switch that receives last speaker stream 64.
  • Third row 79 identifies the target of last speaker stream 64 identified in second row 78. In the example of FIG. 3, the target of the last speaker stream 64 is video bridge 16 (because the current speaker is a CP client). Third row 79 identifies the target as “VB-LS,” which is the name of the last speaker stream communicated from media switch 14 a to video bridge 16. Third row 79 also identifies the media switch to which the last speaker's video stream should be communicated, which is media switch 14 a (MS1).
  • FIG. 5 illustrates example CP stream map table 80 that stream controller 18 may use to control continuous presence multicast group 22 in the example video conferencing system 10 of FIG. 3. Table 80 identifies the video streams by their stream IDs in first column 82 and the media switch ID in second column 84.
  • First row 86 of table 80 identifies continuous presence, current speaker (CPCS) stream 62 that is multicast to the CP clients (e.g., endpoints 12 subscribed to continuous presence multicast group 22). In the example of FIG. 3, video bridge 16 generates and communicates continuous presence, current speaker (CPCS) stream 62 to media switch 14 a. First row 86 identifies this stream as “CPCS.” First row 86 also identifies media switch 14 a (MS1) as the media switch that receives this stream. Table 80 does not specify a target for this stream because, as mentioned above, continuous presence, current speaker (CPCS) stream 62 is multicast to the CP clients.
  • Second row 88 identifies continuous presence, last speaker (CPLS) stream 66. In the example of FIG. 3, video bridge 16 generates and communicates continuous presence, last speaker (CPLS) stream 66 to media switch 14 a. Second row 88 identifies this stream 66 as “CPLS.” Second row 88 also identifies media switch 14 a (MS1) as the media switch that receives this stream 66.
  • Third row 89 identifies the target of continuous presence, last speaker (CPLS) stream 66 in second row 88. In the example of FIG. 3, continuous presence, last speaker (CPLS) stream 66 is communicated to endpoint 12 c. Third row 89 identifies the target stream as “EP 3-1 out,” which stands for the output stream to endpoint 3-1. Third row 89 also identifies media switch 14 c (MS3) as the media switch 14 to which continuous presence, last speaker (CPLS) stream 66 should be communicated.
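  • The stream map tables of FIGS. 4 and 5 can be modeled as simple (stream ID, media switch) records; the Python layout below is an illustrative rendering, not a format from the disclosure. The multicast rows carry no target row, while the last speaker's stream is paired with a row naming where it must be forwarded.

```python
# Rows mirror FIG. 4 (VA stream map, table 70).
va_stream_map = [
    ("EP 3-1 in", "MS3"),  # row 76: current speaker stream received at MS3; multicast, no target row
    ("EP 2-1 in", "MS2"),  # row 78: last speaker stream received at MS2
    ("VB-LS", "MS1"),      # row 79: target of the last speaker stream, forwarded toward the bridge at MS1
]

# Rows mirror FIG. 5 (CP stream map, table 80).
cp_stream_map = [
    ("CPCS", "MS1"),        # row 86: CPCS stream from the bridge; multicast to CP clients
    ("CPLS", "MS1"),        # row 88: CPLS stream from the bridge
    ("EP 3-1 out", "MS3"),  # row 89: target of the CPLS stream, the current speaker's endpoint
]
```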
  • FIG. 6 is an example of a video conferencing system 100 implementing transcoding and/or transrating, in which the current speaker is a VA client. Similar to FIGS. 1-3, system 100 in FIG. 6 includes endpoints 112 a, 112 b, 112 c, 112 d, 112 e, 112 f, 112 g, and 112 h (generally, endpoints 112); media switches 114 a, 114 b, 114 c, 114 d, 114 e, 114 f, and 114 g (generally, media switches 114); video bridges 116 a and 116 b (generally, video bridges 116); stream controller 118; voice activated multicast groups 120 a and 120 b (generally, voice activated multicast group 120); and continuous presence multicast groups 122 a and 122 b (generally, continuous presence multicast group 122). Generally, these elements operate as described above in FIGS. 1-3. For example, voice activated multicast groups 120 a and 120 b multicast current speaker streams 150 a and 150 b, and continuous presence multicast groups 122 a and 122 b multicast continuous presence, current speaker (CPCS) streams 152 a and 152 b.
  • In FIG. 6, a first portion 102 of system 100 operates using one codec (in this example, H263), and a second portion 104 of system 100 operates using a different codec (in this example, H264). In system 100, the transcoding and/or transrating is implemented by a pair of virtual clients 106 and 108. Virtual client 106 is subscribed to voice activated multicast group 120 a, and virtual client 108 is subscribed to voice activated multicast group 120 b.
  • Endpoints 112 a, 112 d, 112 f, and 112 g joined the video conference as VA clients. Endpoints 112 a and 112 d subscribed to voice activated multicast group (H263) 120 a, and endpoints 112 f and 112 g subscribed to voice activated multicast group (H264) 120 b. In this example, endpoint 112 a has been designated the current speaker (CS).
  • Endpoints 112 b, 112 c, 112 e, and 112 h joined the video conference as CP clients. Endpoints 112 b and 112 c subscribed to continuous presence multicast group (H263) 122 a, and endpoints 112 e and 112 h subscribed to continuous presence multicast group (H264) 122 b. In this example, endpoint 112 h has been designated the last speaker (LS).
  • Video bridges 116 joined the video conference as VA clients. Video bridge 116 a subscribed to voice activated multicast group (H263) 120 a, and video bridge 116 b subscribed to voice activated multicast group (H264) 120 b.
  • Endpoint 112 a, the current speaker, multicasts its current speaker stream 150 a to the VA clients in portion 102 of system 100, which include endpoint 112 d, video bridge 116 a, and virtual client 106. Endpoint 112 a communicates its current speaker stream 150 a to media switch 114 b, which communicates stream 150 a via voice activated multicast group 120 a to media switches 114 a, 114 c, and 114 d. Media switch 114 a communicates current speaker stream 150 a to video bridge 116 a, media switch 114 c communicates current speaker stream 150 a to endpoint 112 d, and media switch 114 d communicates current speaker stream 150 a to virtual client 106.
  • Virtual clients 106 and 108 transcode or transrate current speaker stream 150 a from the coding protocol supported by portion 102 of system 100 to the coding protocol supported by portion 104 of system 100. As a result, virtual clients 106 and 108 generate current speaker stream 150 b, which corresponds to current speaker stream 150 a. In this particular example, current speaker stream 150 a is H263, and current speaker stream 150 b is H264.
  • Virtual client 108 is the logical current speaker in portion 104 of system 100. Thus, consistent with the prior description of the voice activated implementation, virtual client 108 multicasts its current speaker stream 150 b to the VA clients in portion 104 of system 100, which include endpoints 112 f and 112 g, and video bridge 116 b. Virtual client 108 communicates current speaker stream 150 b to media switch 114 d, which communicates stream 150 b via voice activated multicast group 120 b to media switches 114 e, 114 f, and 114 g. Media switch 114 e communicates current speaker stream 150 b to endpoint 112 f, media switch 114 f communicates current speaker stream 150 b to endpoint 112 g, and media switch 114 g communicates current speaker stream 150 b to video bridge 116 b.
  • Video bridges 116 a and 116 b receive current speaker streams 150 a and 150 b, generate continuous presence, current speaker (CPCS) streams 152 a and 152 b, and multicast continuous presence, current speaker (CPCS) streams 152 a and 152 b to the CP clients, which include endpoints 112 b, 112 c, 112 e, and 112 h. Because video bridges 116 a and 116 b are VA clients, video bridges 116 a and 116 b receive current speaker streams 150 a and 150 b when these streams are multicast to the voice activated multicast groups 120 a and 120 b. Video bridges 116 a and 116 b use current speaker streams 150 a and 150 b to generate continuous presence, current speaker (CPCS) streams 152 a and 152 b. Continuous presence, current speaker (CPCS) streams 152 a and 152 b allow participants to view several streams simultaneously, one of which is the current speaker. The other streams may be fixed streams, streams from other endpoints 112, streams from a video presentation, a slideshow, streams from a computer, or streams from any other suitable visual representation.
  • Video bridge 116 a communicates continuous presence, current speaker (CPCS) stream 152 a to media switch 114 a, which communicates stream 152 a via continuous presence multicast group 122 a to media switches 114 b and 114 c. Media switch 114 b communicates continuous presence, current speaker (CPCS) stream 152 a to endpoint 112 b, and media switch 114 c communicates continuous presence, current speaker (CPCS) stream 152 a to endpoint 112 c.
  • Video bridge 116 b communicates continuous presence, current speaker (CPCS) stream 152 b to media switch 114 g, which communicates stream 152 b via continuous presence multicast group 122 b to media switches 114 e and 114 f. Media switch 114 e communicates continuous presence, current speaker (CPCS) stream 152 b to endpoint 112 e, and media switch 114 f communicates continuous presence, current speaker (CPCS) stream 152 b to endpoint 112 h.
  • Endpoint 112 h, which is designated as the last speaker in this example, unicasts its last speaker (LS) stream 154 b to virtual client 108, which is the logical current speaker in portion 104 of system 100. Because virtual client 108 is a voice activated client, it would typically receive current speaker stream 150 b from voice activated multicast group 120 b. Because virtual client 108 is the logical current speaker in portion 104 of system 100, virtual client 108 instead receives last speaker stream 154 b of the last speaker. Endpoint 112 h communicates last speaker stream 154 b to media switch 114 f, which communicates it to media switch 114 d. Media switch 114 d communicates last speaker stream 154 b to virtual client 108.
  • Virtual clients 106 and 108 transcode or transrate last speaker stream 154 b from the coding or rating protocol supported by portion 104 of system 100 to the coding or rating protocol supported by portion 102 of system 100. As a result, virtual clients 106 and 108 generate last speaker stream 154 a, which corresponds to last speaker stream 154 b. In this particular example, last speaker stream 154 a is H263, and last speaker stream 154 b is H264.
  • Virtual client 106, which is the logical last speaker in portion 102 of system 100, unicasts last speaker stream 154 a to endpoint 112 a, which is the current speaker. Endpoint 112 a is a VA client and thus would typically receive current speaker stream 150 a from voice activated multicast group 120 a. Because endpoint 112 a is the current speaker, current speaker stream 150 a would present the participant at endpoint 112 a with a video of himself or herself. Rather than present the participant at endpoint 112 a with a video of himself or herself, endpoint 112 a receives stream 154 a of the last speaker. Virtual client 106 communicates last speaker stream 154 a to media switch 114 d, which communicates it to media switch 114 b. Media switch 114 b communicates last speaker stream 154 a to endpoint 112 a.
  • Stream controller 118 communicates instructions to media switches 114 to control the processing and/or communication of streams 150, 152, and 154 as described above. In an alternative embodiment, stream controller 118 may communicate instructions to endpoints 112 and video bridges 116 regarding the processing and/or communication of streams 150, 152, and 154. Stream controller 118 also may communicate instructions to control the processing or communication of the other streams that video bridge 116 combines with current speaker stream 150 to generate continuous presence, current speaker (CPCS) stream 152.
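The transcoding relay described above for FIG. 6 can be sketched as follows. This is an illustrative sketch only, not the patented implementation: the class, method, and variable names are assumptions, and the "transcoding" step is a stand-in that merely relabels the codec field.

```python
# Sketch of the paired virtual clients of FIG. 6: each subscribes to the
# multicast group of one codec domain, transcodes the current speaker stream,
# and republishes it into the other domain. Names are illustrative.

class VirtualClient:
    def __init__(self, name, from_codec, to_codec):
        self.name = name
        self.from_codec = from_codec
        self.to_codec = to_codec
        self.subscribers = []  # downstream multicast group members (queues)

    def subscribe(self, client_queue):
        self.subscribers.append(client_queue)

    def on_stream(self, stream):
        # Receive a stream from one codec domain, "transcode" it (stand-in
        # for real transcoding/transrating), and fan it out to subscribers.
        assert stream["codec"] == self.from_codec
        transcoded = dict(stream, codec=self.to_codec)
        for queue in self.subscribers:
            queue.append(transcoded)
        return transcoded


# The virtual client pair bridges the H263 domain into the H264 domain.
vc = VirtualClient("virtual-client-108", from_codec="H263", to_codec="H264")
endpoint_112f, endpoint_112g = [], []  # stand-ins for VA client endpoints
vc.subscribe(endpoint_112f)
vc.subscribe(endpoint_112g)

out = vc.on_stream({"id": "150a", "codec": "H263", "speaker": "112a"})
print(out["codec"])  # H264
```

In this sketch the original stream content is preserved while only the codec label changes, mirroring how stream 150 b corresponds to stream 150 a in the description above.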
  • FIG. 7 is an example of video conferencing system 100 implementing transcoding and/or transrating, in which the current speaker is a CP client. As in FIG. 6, system 100 in FIG. 7 includes endpoints 112 a, 112 b, 112 c, 112 d, 112 e, 112 f, 112 g, and 112 h (generally, endpoints 112); media switches 114 a, 114 b, 114 c, 114 d, 114 e, 114 f, and 114 g (generally, media switches 114); video bridges 116 a and 116 b (generally, video bridges 116); stream controller 118; voice activated multicast groups 120 a and 120 b (generally, voice activated multicast groups 120); and continuous presence multicast groups 122 a and 122 b (generally, continuous presence multicast groups 122). Voice activated multicast groups 120 a and 120 b multicast current speaker streams 150 a and 150 b, and continuous presence multicast groups 122 a and 122 b multicast continuous presence, current speaker (CPCS) streams 152 a and 152 b.
  • A first portion 102 of system 100 operates using one codec (in this example, H263), and a second portion 104 of system 100 operates using a different codec (in this example, H264). In system 100, the transcoding and/or transrating is implemented by a pair of virtual clients 106 and 108. Virtual client 106 is subscribed to voice activated multicast group 120 a, and virtual client 108 is subscribed to voice activated multicast group 120 b.
  • Endpoints 112 a, 112 d, 112 f, and 112 g joined the video conference as VA clients. Endpoints 112 a and 112 d subscribed to voice activated multicast group (H263) 120 a, and endpoints 112 f and 112 g subscribed to voice activated multicast group (H264) 120 b.
  • Endpoints 112 b, 112 c, 112 e, and 112 h joined the video conference as CP clients. Endpoints 112 b and 112 c subscribed to continuous presence multicast group (H263) 122 a, and endpoints 112 e and 112 h subscribed to continuous presence multicast group (H264) 122 b. In this example, endpoint 112 b has been designated the current speaker (CS), and endpoint 112 h has been designated the last speaker (LS).
  • Video bridges 116 joined the video conference as VA clients. Video bridge 116 a subscribed to voice activated multicast group 120 a, and video bridge 116 b subscribed to voice activated multicast group 120 b.
  • Endpoint 112 b, the current speaker, multicasts its current speaker stream 150 a to the VA clients in portion 102 of system 100, which include endpoints 112 a and 112 d, video bridge 116 a, and virtual client 106. As shown in FIG. 7, endpoint 112 b communicates its current speaker stream 150 a to media switch 114 b, which communicates stream 150 a via voice activated multicast group 120 a to media switches 114 a, 114 c, and 114 d. Media switch 114 b also communicates stream 150 a to endpoint 112 a. Media switch 114 a communicates current speaker stream 150 a to video bridge 116 a, media switch 114 c communicates current speaker stream 150 a to endpoint 112 d, and media switch 114 d communicates current speaker stream 150 a to virtual client 106.
  • Virtual clients 106 and 108 transcode or transrate current speaker stream 150 a from the coding or rating protocol supported by portion 102 of system 100 to the coding or rating protocol supported by portion 104 of system 100. As a result, virtual clients 106 and 108 generate current speaker stream 150 b, which corresponds to current speaker stream 150 a. In this particular example, current speaker stream 150 a is H263, and current speaker stream 150 b is H264.
  • Virtual client 108 is the logical current speaker in portion 104 of system 100. Thus, consistent with the prior description of the voice activated implementation, virtual client 108 multicasts its current speaker stream 150 b to the VA clients in portion 104 of system 100, which include endpoints 112 f and 112 g, and video bridge 116 b. As shown in FIG. 7, virtual client 108 communicates current speaker stream 150 b to media switch 114 d, which communicates stream 150 b via voice activated multicast group 120 b to media switches 114 e, 114 f, and 114 g. Media switch 114 e communicates current speaker stream 150 b to endpoint 112 f, media switch 114 f communicates current speaker stream 150 b to endpoint 112 g, and media switch 114 g communicates current speaker stream 150 b to video bridge 116 b.
  • Video bridges 116 a and 116 b receive current speaker streams 150 a and 150 b, generate continuous presence, current speaker (CPCS) streams 152 a and 152 b, and multicast continuous presence, current speaker (CPCS) streams 152 a and 152 b to the CP clients, which include endpoints 112 c, 112 e, and 112 h. Because video bridges 116 a and 116 b are VA clients, video bridges 116 a and 116 b receive current speaker streams 150 a and 150 b when streams 150 a and 150 b are multicast to the voice activated multicast groups 120 a and 120 b. Video bridges 116 a and 116 b use current speaker streams 150 a and 150 b to generate continuous presence, current speaker (CPCS) streams 152 a and 152 b. Continuous presence, current speaker (CPCS) streams 152 a and 152 b allow participants to view several streams simultaneously, one of which is the current speaker. The other streams may be fixed streams, streams from other endpoints 112, streams from a video presentation, a slideshow, streams from a computer, or streams from any other suitable visual representation.
  • Video bridge 116 a communicates continuous presence, current speaker (CPCS) stream 152 a to media switch 114 a, which communicates stream 152 a via continuous presence multicast group 122 a to media switch 114 c. Media switch 114 c communicates continuous presence, current speaker (CPCS) stream 152 a to endpoint 112 c.
  • Video bridge 116 b communicates continuous presence, current speaker (CPCS) stream 152 b to media switch 114 g, which communicates stream 152 b via continuous presence multicast group 122 b to media switches 114 e and 114 f. Media switch 114 e communicates continuous presence, current speaker (CPCS) stream 152 b to endpoint 112 e, and media switch 114 f communicates continuous presence, current speaker (CPCS) stream 152 b to endpoint 112 h.
  • Endpoint 112 h, which is designated as the last speaker in this example, unicasts its last speaker (LS) stream 164 b to virtual client 108, which is the logical current speaker in portion 104 of system 100. Virtual client 108 is a VA client, so stream controller 118 instructs media switch 114 f to communicate last speaker stream 164 b to virtual client 108 via media switch 114 d rather than communicate it to video bridge 116 b.
  • Virtual clients 106 and 108 transcode or transrate last speaker stream 164 b from the coding protocol supported by portion 104 of system 100 to the coding protocol supported by portion 102 of system 100. As a result, virtual clients 106 and 108 generate last speaker stream 164 a, which corresponds to last speaker stream 164 b. In this particular example, last speaker stream 164 a is H263, and last speaker stream 164 b is H264.
  • Virtual client 106, the logical last speaker in portion 102 of system 100, unicasts last speaker stream 164 a to video bridge 116 a. The current speaker is associated with endpoint 112 b, a CP client. Endpoint 112 b, as a CP client, would typically receive continuous presence, current speaker (CPCS) stream 152 a from continuous presence multicast group 122 a. Because endpoint 112 b is the current speaker, continuous presence, current speaker (CPCS) stream 152 a would present the participant at endpoint 112 b with a video of himself or herself. Rather than present the participant at endpoint 112 b with a video of himself or herself, endpoint 112 b receives continuous presence, last speaker (CPLS) stream 166, which is the continuous presence stream with the last speaker video instead of the current speaker video. Virtual client 106 communicates last speaker stream 164 a to media switch 114 d, which communicates it to media switch 114 a. Media switch 114 a communicates last speaker stream 164 a to video bridge 116 a. Because the current speaker is a CP client rather than a VA client, last speaker stream 164 a is not unicast to the current speaker, endpoint 112 b.
  • Video bridge 116 a uses last speaker stream 164 a to generate continuous presence, last speaker (CPLS) stream 166. Continuous presence, last speaker (CPLS) stream 166 is like continuous presence, current speaker (CPCS) stream 152 a but includes last speaker stream 164 a in place of current speaker stream 150 a. Continuous presence, last speaker (CPLS) stream 166 allows a continuous presence client, who is the current speaker, to view several streams simultaneously, one of which is the last speaker. As with continuous presence, current speaker (CPCS) stream 152 a, the other streams in continuous presence, last speaker (CPLS) stream 166 may be fixed streams, streams from other endpoints 112, streams from a video presentation, a slideshow, streams from a computer, or streams from any other suitable visual representation. As shown in FIG. 7, video bridge 116 a communicates continuous presence, last speaker (CPLS) stream 166 to media switch 114 a, which communicates stream 166 to media switch 114 b. Media switch 114 b communicates continuous presence, last speaker (CPLS) stream 166 to endpoint 112 b.
  • Stream controller 118 communicates instructions to media switches 114 to control the processing and/or communication of streams 150, 152, 164, and 166 as described above. In an alternative embodiment, stream controller 118 may communicate instructions to endpoints 112 and video bridges 116 regarding the processing and/or communication of streams 150, 152, 164, and 166. Stream controller 118 also may communicate instructions to control the processing or communication of the other streams that video bridge 116 combines with current speaker stream 150 and last speaker stream 164 to generate continuous presence, current speaker (CPCS) stream 152 and continuous presence, last speaker (CPLS) stream 166.
  • FIG. 8 illustrates an example method for video conferencing. The method begins at step 200, where stream controller 18 receives requests from VA clients, including endpoints 12 a and 12 d and video bridge 16, to subscribe to voice activated multicast group 20. At step 210, stream controller 18 receives requests from CP clients, including endpoints 12 b and 12 c, to subscribe to continuous presence multicast group 22.
  • At step 220, stream controller 18 determines whether the current speaker is a VA client. If the current speaker is a VA client, the method continues at step 230, where stream controller 18 instructs the current speaker (endpoint 12 or media switch 14 associated with current speaker) to multicast the current speaker's stream 50 to VA clients (including video bridge 16), except the current speaker. At step 240, stream controller 18 instructs the last speaker (endpoint 12 or media switch 14 associated with last speaker) to communicate the last speaker's stream 54 to the current speaker (endpoint 12 or media switch 14 associated with current speaker). Stream controller 18 instructs video bridge 16 to generate continuous presence, current speaker stream 52 at step 250 and to multicast stream 52 to CP clients, including endpoints 12 b and 12 c. The method returns to step 220, to determine if the current speaker is still a VA client.
  • If the current speaker is not a VA client at step 220, the current speaker is a CP client, and at step 270, stream controller 18 instructs the current speaker (endpoint 12 or media switch 14 associated with current speaker) to multicast the current speaker's stream 60 to VA clients (including video bridge 16). Stream controller 18 instructs video bridge 16 to generate continuous presence, current speaker stream 62 at step 280 and to multicast stream 62 to CP clients, except the current speaker, at step 290. At step 300, stream controller 18 instructs the last speaker (endpoint 12 or media switch 14 associated with last speaker) to communicate the last speaker's stream 64 to video bridge 16. Stream controller 18 instructs video bridge 16 to generate continuous presence, last speaker stream 66 at step 310 and to communicate stream 66 to the current speaker at step 320. The method returns to step 220, to determine if the current speaker is a VA client.
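The two branches of the FIG. 8 method can be sketched as a routing function. This is a simplified illustration, not the claimed implementation; the function name, instruction tuples, and the "bridge" identifier are assumptions introduced for the sketch.

```python
# Sketch of the FIG. 8 branching: the stream controller issues different
# routing instructions depending on whether the current speaker joined as a
# voice activated (VA) client or a continuous presence (CP) client.

def route_streams(current_speaker, last_speaker, va_clients, cp_clients):
    """Return (source, action, destinations) instruction tuples."""
    instructions = []
    if current_speaker in va_clients:
        # Steps 230-250: the speaker multicasts to the other VA clients and
        # the bridge; the last speaker's stream goes straight to the current
        # speaker; the bridge composes the CPCS stream for CP clients.
        instructions.append((current_speaker, "multicast",
                             [c for c in va_clients if c != current_speaker] + ["bridge"]))
        instructions.append((last_speaker, "unicast", [current_speaker]))
        instructions.append(("bridge", "multicast-CPCS", list(cp_clients)))
    else:
        # Steps 270-320: a CP current speaker still feeds the VA group and the
        # bridge; the bridge builds a CPLS stream so the speaker sees the last
        # speaker rather than itself.
        instructions.append((current_speaker, "multicast", list(va_clients) + ["bridge"]))
        instructions.append(("bridge", "multicast-CPCS",
                             [c for c in cp_clients if c != current_speaker]))
        instructions.append((last_speaker, "unicast", ["bridge"]))
        instructions.append(("bridge", "unicast-CPLS", [current_speaker]))
    return instructions


# VA current speaker 12a: last speaker 12d unicasts directly to 12a.
for step in route_streams("12a", "12d", ["12a", "12d"], ["12b", "12c"]):
    print(step)
```

Running the CP-speaker case (`route_streams("12b", …)`) instead produces the unicast-CPLS instruction of steps 300–320, with the bridge rather than the speaker as the last speaker stream's destination.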
  • FIG. 9 illustrates an example method of generating VA and CP stream map tables 70 and 80. In a particular embodiment, stream controller 18 may use this method or another method to generate stream map tables, such as the example tables illustrated in FIGS. 4 and 5.
  • The method begins at step 400, where stream controller 18 initializes VA stream map table 70 and CP stream map table 80. In a particular embodiment, stream controller 18 creates VA stream map table 70 with 1st speaker entry 76 for identifying the current speaker stream, 2nd speaker entry 78 for identifying the last speaker stream, and 2nd speaker target entry 79 for identifying the intended recipient of the last speaker stream. In a particular embodiment, stream controller 18 creates CP stream map table 80 with 1st speaker entry 86 identifying the continuous presence, current speaker stream, 2nd speaker entry 87 identifying the continuous presence, last speaker stream, and 2nd speaker target entry 88 for identifying the intended recipient of the continuous presence, last speaker stream. In a particular embodiment, 1st speaker entry 86 includes information, such as stream ID 82 and media switch ID 84, identifying the continuous presence, current speaker stream, and 2nd speaker entry 87 includes information, such as stream ID 82 and media switch ID 84, identifying the continuous presence, last speaker stream.
  • At step 410, stream controller 18 receives requests from VA clients, such as endpoints 12 a and 12 d and video bridge 16, to subscribe to voice activated multicast group 20. At step 420, stream controller 18 receives requests from CP clients, such as endpoints 12 b and 12 c, to subscribe to continuous presence multicast group 22.
  • At step 430, stream controller 18 receives information identifying a new current speaker. In a particular embodiment, an external device, such as an audio mixer, provides this information. In response to receiving the information identifying the new current speaker, stream controller 18 moves the information from 1st speaker entry 76 to 2nd speaker entry 78 of VA stream map table 70 at step 440. This change is made to indicate that the old current speaker is the new last speaker. At step 450, stream controller 18 enters current speaker information into 1st speaker entry 76 of VA stream map table 70. In a particular embodiment, the current speaker information includes the stream ID 72 and media switch ID 74 associated with the current speaker.
  • At step 460, stream controller 18 determines whether the new current speaker is a VA client. If the new current speaker is a VA client, the method continues at step 470, where stream controller 18 makes the current speaker the target of the 2nd speaker stream of VA stream map table 70. In a particular embodiment, stream controller 18 identifies the current speaker in 2nd speaker target 79 of VA stream map table 70 by entering the current speaker's input stream ID 72 and media switch ID 74. At step 480, stream controller 18 provides no 2nd speaker stream target 88 in CP stream map table 80.
  • If the new current speaker is not a VA client at step 460, then the new current speaker is a CP client, and the method continues at step 490, where stream controller 18 makes video bridge 16 the 2nd speaker target 79 in VA stream map 70. At step 500, stream controller 18 makes the current speaker the 2nd speaker target 88 of CP stream map 80.
  • At step 510, stream controller 18 communicates VA stream map table 70 and CP stream map table 80 to media switches 14. In a particular embodiment, stream controller 18 communicates tables 70 and 80 to other network devices supporting the video conference. In a particular embodiment, stream controller 18 communicates only information that has changed in tables 70 and 80. The method returns to step 430 to wait for information identifying a new current speaker.
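The table maintenance of steps 440–500 can be sketched as a dictionary update. This is an illustrative sketch only: the function name, dictionary keys, and the `"video_bridge"` marker are assumptions; the fields loosely mirror the 1st/2nd speaker entries and targets described above.

```python
# Sketch of the FIG. 9 speaker-change update: the old 1st speaker entry is
# demoted to 2nd speaker, and the 2nd speaker target depends on whether the
# new speaker is a VA or CP client. Names are illustrative, not the patent's.

def on_new_speaker(va_table, cp_table, speaker, is_va_client):
    # Step 440: the old current speaker becomes the new last speaker.
    va_table["2nd_speaker"] = va_table.get("1st_speaker")
    # Step 450: record the new current speaker's stream and media switch IDs.
    va_table["1st_speaker"] = {"stream_id": speaker["stream_id"],
                               "switch_id": speaker["switch_id"]}
    if is_va_client:
        # Steps 470-480: the last speaker stream is unicast straight to the
        # current speaker; the CP table carries no 2nd speaker target.
        va_table["2nd_speaker_target"] = va_table["1st_speaker"]
        cp_table["2nd_speaker_target"] = None
    else:
        # Steps 490-500: route via the video bridge, which builds the CPLS
        # stream and sends it to the CP current speaker.
        va_table["2nd_speaker_target"] = "video_bridge"
        cp_table["2nd_speaker_target"] = va_table["1st_speaker"]
    return va_table, cp_table


va, cp = {}, {}  # freshly initialized stream map tables (step 400)
on_new_speaker(va, cp, {"stream_id": "S1", "switch_id": "MS-b"}, is_va_client=False)
print(va["2nd_speaker_target"])  # video_bridge
```

A second call with a VA speaker would demote the `S1` entry into the 2nd speaker slot, matching the loop back to step 430 after the tables are communicated to the media switches.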
  • FIG. 10 illustrates an example method for communicating video streams at media switch 14 in support of a video conference. The method begins at step 600, where media switch 14 receives instructions from stream controller 18. In a particular embodiment, the instructions comprise VA stream map table 70 and CP stream map table 80.
  • If media switch 14 is associated with the last speaker identified in the instructions at step 610, media switch 14 unicasts the last speaker stream to the last speaker target identified in the instructions at step 620. In a particular embodiment, media switch 14 determines whether it is associated with the last speaker by determining whether it is identified in the media switch ID 74 of 2nd speaker entry 78 of VA stream map table 70. In a particular embodiment, media switch 14 unicasts the last speaker stream to the 2nd speaker target 79 identified in VA stream map table 70.
  • If media switch 14 is associated with the current speaker identified in the instructions at step 630, media switch 14 multicasts the current speaker stream to the VA multicast group 20 at step 640. In a particular embodiment, media switch 14 determines whether it is associated with the current speaker by determining whether it is identified in the media switch ID 74 of 1st speaker entry 76 of VA stream map table 70. If the current speaker is a VA client at step 650, media switch 14 communicates the last speaker stream to the current speaker endpoint 12 at step 660. In a particular embodiment, media switch 14 receives the last speaker stream from another media switch 14 or other network device and communicates the stream to current speaker endpoint 12. If the current speaker is not a VA client at step 650, then the current speaker is a CP client, and media switch 14 communicates the continuous presence, last speaker stream to the current speaker endpoint 12 at step 670. In a particular embodiment, media switch 14 receives the continuous presence, last speaker stream from video bridge 16 or other network device and communicates the stream to current speaker endpoint 12.
  • If media switch 14 is supporting VA clients (other than the current speaker) at step 680, media switch 14 communicates the current speaker stream to the other VA clients at step 690. In a particular embodiment, media switch 14 receives the current speaker stream from another media switch 14 or other network device that multicast the current speaker stream to voice activated multicast group 20, and media switch 14 communicates the current speaker stream to one or more endpoints 12 that subscribed to voice activated multicast group 20.
  • If media switch 14 is supporting CP clients (other than the current speaker) at step 700, media switch 14 communicates the continuous presence, current speaker stream to the other CP clients at step 710. In a particular embodiment, media switch 14 receives the continuous presence, current speaker stream from video bridge 16 or other network device that multicast the continuous presence, current speaker stream to continuous presence multicast group 22, and media switch 14 communicates the continuous presence, current speaker stream to one or more endpoints 12 that subscribed to continuous presence multicast group 22.
  • The method continues at step 600, when media switch 14 receives new instructions from stream controller 18.
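The per-switch decisions of FIG. 10 can be sketched as a function over the received stream map table. This is a simplified illustration under assumed names (the action labels, `"va-group"` marker, and table keys are not from the patent); real switches would act on packets rather than return a plan.

```python
# Sketch of the FIG. 10 forwarding decisions: a media switch inspects the
# stream map table it received and decides, for each role it plays, what to
# forward where. All identifiers are illustrative.

def media_switch_actions(switch_id, va_table, local_va_clients,
                         local_cp_clients, current_speaker_is_va):
    actions = []
    # Steps 610-620: if this switch hosts the last speaker, unicast its
    # stream to the target named in the instructions.
    if va_table["2nd_speaker"]["switch_id"] == switch_id:
        actions.append(("unicast-last-speaker", va_table["2nd_speaker_target"]))
    # Steps 630-670: if this switch hosts the current speaker, multicast the
    # speaker's stream, then deliver the appropriate return stream (last
    # speaker stream for a VA speaker, CPLS stream for a CP speaker).
    if va_table["1st_speaker"]["switch_id"] == switch_id:
        actions.append(("multicast-current-speaker", "va-group"))
        if current_speaker_is_va:
            actions.append(("deliver-last-speaker", "current-speaker-endpoint"))
        else:
            actions.append(("deliver-CPLS", "current-speaker-endpoint"))
    # Steps 680-710: fan the shared streams out to locally attached clients.
    for client in local_va_clients:
        actions.append(("forward-current-speaker", client))
    for client in local_cp_clients:
        actions.append(("forward-CPCS", client))
    return actions


va_table = {"1st_speaker": {"switch_id": "MS-14b"},
            "2nd_speaker": {"switch_id": "MS-14c"},
            "2nd_speaker_target": "endpoint-12a"}
for act in media_switch_actions("MS-14c", va_table, ["endpoint-12d"], [],
                                current_speaker_is_va=True):
    print(act)
```

Here switch MS-14c hosts the last speaker, so it unicasts toward the instructed target and also forwards the current speaker stream to its locally attached VA client, mirroring steps 610–620 and 680–690.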
  • The present disclosure encompasses all changes, substitutions, variations, alterations, and modifications to the example embodiments described herein that a person having ordinary skill in the art would comprehend. Similarly, where appropriate, the appended claims encompass all changes, substitutions, variations, alterations, and modifications to the example embodiments described herein that a person having ordinary skill in the art would comprehend.

Claims (20)

1. An apparatus, comprising:
a first module operable to receive a request from a first endpoint to subscribe to a voice activated multicast group, to cause the first endpoint to receive a current speaker's video stream if the first endpoint is not the current speaker and to receive a last speaker's video stream if the first endpoint is the current speaker; and
a second module operable to receive a request from a second endpoint to subscribe to a continuous presence multicast group, to cause the second endpoint to receive a continuous presence, current speaker video stream if the second endpoint is not the current speaker and to receive a continuous presence, last speaker video stream if the second endpoint is the current speaker,
wherein the continuous presence, current speaker video stream comprises two or more video streams, one of which includes at least a portion of the current speaker's video stream,
wherein the continuous presence, last speaker video stream comprises two or more video streams, one of which includes at least a portion of a last speaker's video stream.
2. The apparatus of claim 1, wherein the first module causes the current speaker's endpoint to multicast the current speaker's video stream to two or more other endpoints, including at least the first endpoint if the first endpoint is not the current speaker.
3. The apparatus of claim 1, wherein the first module causes the last speaker's endpoint to unicast the last speaker's stream to the first endpoint if the first endpoint is the current speaker.
4. The apparatus of claim 1, wherein the second module causes a video bridge to multicast the continuous presence, current speaker video stream to two or more endpoints, including the second endpoint if the second endpoint is not the current speaker.
5. The apparatus of claim 1, wherein the second module causes a video bridge to unicast the continuous presence, last speaker video stream to the second endpoint if the second endpoint is the current speaker.
6. The apparatus of claim 1, wherein the first module and second module communicate instructions to one or more media switches to control communication of the current speaker's video stream, the last speaker's video stream, the continuous presence, current speaker video stream, and the continuous presence, last speaker video stream.
7. (canceled)
8. (canceled)
9. (canceled)
10. (canceled)
11. (canceled)
12. (canceled)
13. (canceled)
14. (canceled)
15. (canceled)
16. (canceled)
17. (canceled)
18. (canceled)
19. (canceled)
20. (canceled)
US13/471,288 2007-12-20 2012-05-14 System and method for video conferencing Active US8619118B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US13/471,288 US8619118B2 (en) 2007-12-20 2012-05-14 System and method for video conferencing
US14/098,059 US9137486B2 (en) 2007-12-20 2013-12-05 System and method for video conferencing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/961,710 US8179422B2 (en) 2007-12-20 2007-12-20 System and method for video conferencing
US13/471,288 US8619118B2 (en) 2007-12-20 2012-05-14 System and method for video conferencing

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/961,710 Division US8179422B2 (en) 2007-12-20 2007-12-20 System and method for video conferencing

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/098,059 Continuation US9137486B2 (en) 2007-12-20 2013-12-05 System and method for video conferencing

Publications (2)

Publication Number Publication Date
US20120262538A1 true US20120262538A1 (en) 2012-10-18
US8619118B2 US8619118B2 (en) 2013-12-31

Family

ID=40788102

Family Applications (3)

Application Number Title Priority Date Filing Date
US11/961,710 Active 2030-12-05 US8179422B2 (en) 2007-12-20 2007-12-20 System and method for video conferencing
US13/471,288 Active US8619118B2 (en) 2007-12-20 2012-05-14 System and method for video conferencing
US14/098,059 Active US9137486B2 (en) 2007-12-20 2013-12-05 System and method for video conferencing

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/961,710 Active 2030-12-05 US8179422B2 (en) 2007-12-20 2007-12-20 System and method for video conferencing

Family Applications After (1)

Application Number Title Priority Date Filing Date
US14/098,059 Active US9137486B2 (en) 2007-12-20 2013-12-05 System and method for video conferencing

Country Status (1)

Country Link
US (3) US8179422B2 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140028785A1 (en) * 2012-07-30 2014-01-30 Motorola Mobility LLC. Video bandwidth allocation in a video conference
US9007465B1 (en) 2012-08-31 2015-04-14 Vce Company, Llc Obtaining customer support for electronic system using first and second cameras
CN112104833A (en) * 2019-10-17 2020-12-18 越朗信息科技(上海)有限公司 Audio and video integrated conference system and privacy realization method thereof

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8081205B2 (en) * 2003-10-08 2011-12-20 Cisco Technology, Inc. Dynamically switched and static multiple video streams for a multimedia conference
US8713105B2 (en) * 2006-01-03 2014-04-29 Cisco Technology, Inc. Method and apparatus for transcoding and transrating in distributed video systems
US7962640B2 (en) * 2007-06-29 2011-06-14 The Chinese University Of Hong Kong Systems and methods for universal real-time media transcoding
US8514265B2 (en) * 2008-10-02 2013-08-20 Lifesize Communications, Inc. Systems and methods for selecting videoconferencing endpoints for display in a composite video image
US8411129B2 (en) * 2009-12-14 2013-04-02 At&T Intellectual Property I, L.P. Video conference system and method using multicast and unicast transmissions
US8995306B2 (en) 2011-04-06 2015-03-31 Cisco Technology, Inc. Video conferencing with multipoint conferencing units and multimedia transformation units
US20130038678A1 (en) * 2011-08-08 2013-02-14 Emc Satcom Technologies, Llc Video management system over satellite
US9392337B2 (en) 2011-12-22 2016-07-12 Cisco Technology, Inc. Wireless TCP link state monitoring based video content adaptation and data delivery
US9118807B2 (en) 2013-03-15 2015-08-25 Cisco Technology, Inc. Split frame multistream encode
US9215413B2 (en) 2013-03-15 2015-12-15 Cisco Technology, Inc. Split frame multistream encode
US20140375755A1 (en) * 2013-06-24 2014-12-25 Electronics And Telecommunications Research Institute Apparatus and method for changing main screen based on distributed telepresence
KR20150011886A (en) * 2013-07-23 2015-02-03 한국전자통신연구원 Method and apparatus for distribute vide conference focused on participants
GB201510672D0 (en) * 2015-06-17 2015-07-29 Cyviz As Video conferencing control systems
JP6387972B2 (en) * 2016-01-25 2018-09-12 ブラザー工業株式会社 COMMUNICATION METHOD, COMMUNICATION SYSTEM, AND COMMUNICATION PROGRAM

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070153712A1 (en) * 2006-01-05 2007-07-05 Cisco Technology, Inc. Method and architecture for distributed video switching using media notifications
US20070186002A1 (en) * 2002-03-27 2007-08-09 Marconi Communications, Inc. Videophone and method for a video call

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH01243767A (en) 1988-03-25 1989-09-28 Toshiba Corp Conference call system
US5007046A (en) 1988-12-28 1991-04-09 At&T Bell Laboratories Computer controlled adaptive speakerphone
US5058153A (en) 1989-12-27 1991-10-15 Carew Edward C Noise mitigation and mode switching in communications terminals such as telephones
JPH07202887A (en) 1993-12-28 1995-08-04 Toshiba Corp Distributed conference system
US5844600A (en) 1995-09-15 1998-12-01 General Datacomm, Inc. Methods, apparatus, and systems for transporting multimedia conference data streams through a transport network
US5848098A (en) 1996-07-09 1998-12-08 Lucent Technologies, Inc. Personal base station extension calling arrangement
US6128649A (en) 1997-06-02 2000-10-03 Nortel Networks Limited Dynamic selection of media streams for display
US6148068A (en) 1997-10-20 2000-11-14 Nortel Networks Limited System for managing an audio conference
US6078809A (en) 1998-02-27 2000-06-20 Motorola, Inc. Method and apparatus for performing a multi-party communication in a communication system
US6535604B1 (en) 1998-09-04 2003-03-18 Nortel Networks Limited Voice-switching device and method for multiple receivers
US6327276B1 (en) 1998-12-22 2001-12-04 Nortel Networks Limited Conferencing over LAN/WAN using a hybrid client/server configuration
US6300973B1 (en) 2000-01-13 2001-10-09 Meir Feder Method and system for multimedia communication control
US6590604B1 (en) 2000-04-07 2003-07-08 Polycom, Inc. Personal videoconferencing system having distributed processing architecture
US7362349B2 (en) 2002-07-10 2008-04-22 Seiko Epson Corporation Multi-participant conference system with controllable content delivery using a client monitor back-channel
WO2004044710A2 (en) * 2002-11-11 2004-05-27 Supracomm, Inc. Multicast videoconferencing
US8659636B2 (en) 2003-10-08 2014-02-25 Cisco Technology, Inc. System and method for performing distributed video conferencing
CA2537944C (en) * 2003-10-08 2010-11-30 Cisco Technology, Inc. System and method for performing distributed video conferencing

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
US20070186002A1 (en) * 2002-03-27 2007-08-09 Marconi Communications, Inc. Videophone and method for a video call
US20080273079A1 (en) * 2002-03-27 2008-11-06 Robert Craig Campbell Videophone and method for a video call
US20070153712A1 (en) * 2006-01-05 2007-07-05 Cisco Technology, Inc. Method and architecture for distributed video switching using media notifications

Cited By (5)

Publication number Priority date Publication date Assignee Title
US20140028785A1 (en) * 2012-07-30 2014-01-30 Motorola Mobility LLC. Video bandwidth allocation in a video conference
US9118940B2 (en) * 2012-07-30 2015-08-25 Google Technology Holdings LLC Video bandwidth allocation in a video conference
US9007465B1 (en) 2012-08-31 2015-04-14 Vce Company, Llc Obtaining customer support for electronic system using first and second cameras
US10652289B1 (en) 2012-08-31 2020-05-12 EMC IP Holding Company LLC Combining data and video communication for customer support of electronic system
CN112104833A (en) * 2019-10-17 2020-12-18 越朗信息科技(上海)有限公司 Audio and video integrated conference system and privacy realization method thereof

Also Published As

Publication number Publication date
US8619118B2 (en) 2013-12-31
US8179422B2 (en) 2012-05-15
US20090160929A1 (en) 2009-06-25
US9137486B2 (en) 2015-09-15
US20140085405A1 (en) 2014-03-27

Similar Documents

Publication Publication Date Title
US9137486B2 (en) System and method for video conferencing
US8289368B2 (en) Intelligent grouping and synchronized group switching for multimedia conferencing
CN1849824B (en) System and method for performing distributed video conferencing
EP1678951B1 (en) System and method for performing distributed video conferencing
KR100880150B1 (en) Multi-point video conference system and media processing method thereof
US7321384B1 (en) Method and apparatus for using far end camera control (FECC) messages to implement participant and layout selection in a multipoint videoconference
Singh et al. Centralized conferencing using SIP
US8614732B2 (en) System and method for performing distributed multipoint video conferencing
US7627629B1 (en) Method and apparatus for multipoint conferencing
US8149261B2 (en) Integration of audio conference bridge with video multipoint control unit
US9596433B2 (en) System and method for a hybrid topology media conferencing system
US7653013B1 (en) Conferencing systems with enhanced capabilities
JP2007329917A (en) Video conference system, and method for enabling a plurality of video conference attendees to see and hear each other, and graphical user interface for videoconference system
WO2011149359A1 (en) System and method for scalable media switching conferencing
US9270474B2 (en) Endpoint initiation of multipart conferences
Balaouras et al. Potential and limitations of a teleteaching environment based on H.323 audio-visual communication systems
JP3748952B2 (en) Multipoint connection network construction method and multipoint connection network system
JP6692922B2 (en) Video conferencing server and method capable of providing multi-screen video conferencing using a plurality of video conferencing terminals
Schmidt et al. Teleconferencing for the EFDA laboratories
KR20110067435A (en) Video conference system having content sharing functionality and method for content sharing
Baurens Groupware

Legal Events

Date Code Title Description
STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8