Knowledge Base For Broadcast Techies
An overview of the broadcast technology terms we work with
Teleprompter, also known as an autocue, is a display device that prompts the person speaking with an electronic visual text of a speech or script.
Using a teleprompter is similar to using cue cards. The screen is in front of, and usually below, the lens of a professional video camera, and the words on the screen are reflected to the eyes of the presenter using a sheet of clear glass or other beam splitter, so that they are read by looking directly at the lens position, but are not imaged by the lens. Light from the performer passes through the front side of the glass into the lens, while a shroud surrounding the lens and the back side of the glass prevents unwanted light from entering the lens. Mechanically this works in a very similar way to the Pepper’s ghost illusion from classic theatre: an image viewable from one angle but not another.
See our related products:
Dynamic Adaptive Streaming over HTTP (DASH), also known as MPEG-DASH, is an adaptive bitrate streaming technique that enables high quality streaming of media content over the Internet delivered from conventional HTTP web servers. Similar to Apple’s HTTP Live Streaming (HLS) solution, MPEG-DASH works by breaking the content into a sequence of small segments, which are served over HTTP. Each segment contains a short interval of playback time of content that is potentially many hours in duration, such as a movie or the live broadcast of a sports event. The content is made available at a variety of different bit rates, i.e., alternative segments encoded at different bit rates covering aligned short intervals of playback time. While the content is being played back by an MPEG-DASH client, the client uses an adaptive bitrate (ABR) algorithm[1] to automatically select the segment with the highest bit rate possible that can be downloaded in time for playback without causing stalls or re-buffering events.[2] The current MPEG-DASH reference client dash.js[3] offers both buffer-based (BOLA[4]) and hybrid (DYNAMIC[2]) bitrate adaptation algorithms. Thus, an MPEG-DASH client can seamlessly adapt to changing network conditions and provide high quality playback with few stalls or re-buffering events.
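The core selection step ("the highest bit rate that can be downloaded in time") can be sketched as a simple throughput-based rule. This is an illustrative simplification, not the actual BOLA or DYNAMIC logic used by dash.js; the function name, encoding ladder, and safety factor below are hypothetical.

```python
# Sketch of a throughput-based ABR selection step, simplified for illustration.
# Real DASH clients combine throughput estimates with buffer occupancy
# (e.g. BOLA, or hybrid schemes) rather than this single rule.

def pick_bitrate(available_kbps, measured_throughput_kbps, safety_factor=0.8):
    """Pick the highest rendition that fits within a fraction of measured throughput."""
    budget = measured_throughput_kbps * safety_factor
    candidates = [b for b in sorted(available_kbps) if b <= budget]
    # Fall back to the lowest rendition if nothing fits, to keep playback going.
    return candidates[-1] if candidates else min(available_kbps)

ladder = [235, 750, 1750, 3000, 5800]  # hypothetical encoding ladder, in kbps
print(pick_bitrate(ladder, 4200))  # -> 3000 (highest rate under 0.8 * 4200 = 3360)
print(pick_bitrate(ladder, 200))   # -> 235 (nothing fits, so take the floor)
```

The safety factor leaves headroom for throughput fluctuations between segment downloads, which is why the client does not simply pick the rendition closest to the raw measured throughput.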
See our related products:
Ultra-high-definition television (also known as Ultra HD television, Ultra HD, UHDTV, UHD and Super Hi-Vision) today includes 4K UHD and 8K UHD, which are two digital video formats with an aspect ratio of 16:9. These were first proposed by NHK Science & Technology Research Laboratories and later defined and approved by the International Telecommunication Union (ITU).[1][2][3][4] It is a digital television (DTV) standard, and the successor to high-definition television (HDTV), which in turn was the successor to standard-definition television (SDTV).
The Consumer Electronics Association announced on October 17, 2012, that “Ultra High Definition”, or “Ultra HD”, would be used for displays that have an aspect ratio of 16:9 or wider and at least one digital input capable of carrying and presenting native video at a minimum resolution of 3840×2160 pixels.[5][6] In 2015, the Ultra HD Forum was created to bring together the end-to-end video production ecosystem to ensure interoperability and produce industry guidelines so that adoption of ultra-high-definition television could accelerate. The forum’s list of commercial services offering 4K resolution around the world grew from just 30 in Q3 2015 to 55.[7]
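The CEA criteria above (aspect ratio of 16:9 or wider, native resolution of at least 3840×2160) can be checked mechanically. A minimal sketch, with a hypothetical function name:

```python
# Check a display mode against the CEA's October 2012 "Ultra HD" criteria:
# aspect ratio of 16:9 or wider, and at least 3840x2160 native pixels.
from fractions import Fraction

def qualifies_as_ultra_hd(width, height):
    wide_enough = Fraction(width, height) >= Fraction(16, 9)
    return wide_enough and width >= 3840 and height >= 2160

print(qualifies_as_ultra_hd(3840, 2160))  # 4K UHD -> True
print(qualifies_as_ultra_hd(7680, 4320))  # 8K UHD -> True
print(qualifies_as_ultra_hd(1920, 1080))  # Full HD -> False
```

Using exact `Fraction` arithmetic avoids floating-point edge cases when comparing aspect ratios.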
See our related products:
Web Real-Time Communications (WebRTC) is an open source project created by Google to enable peer-to-peer communication in web browsers and mobile applications through application programming interfaces. This includes audio, video, and data transfers.
It allows audio and video communication to work inside web pages through direct peer-to-peer communication, eliminating the need to install plugins or download native apps.[3] Supported by Apple, Google, Microsoft, Mozilla, and Opera, WebRTC is being standardized through the World Wide Web Consortium (W3C) and the Internet Engineering Task Force (IETF).[4]
Its mission is to “enable rich, high-quality RTC applications to be developed for the browser, mobile platforms, and IoT devices, and allow them all to communicate via a common set of protocols”.[4]
See our related products:
The Media Object Server (MOS) protocol allows newsroom computer systems (NCS) to communicate using a standard protocol with video servers, audio servers, still stores, and character generators for broadcast production.
The MOS protocol is based on XML.[3] It enables the exchange of the following types of messages:[4]
- Descriptive Data for Media Objects. The MOS “pushes” descriptive information and pointers to the NCS as objects are created, modified, or deleted in the MOS. This allows the NCS to be “aware” of the contents of the MOS and enables the NCS to perform searches on and manipulate the data the MOS has sent.
- Playlist Exchange. The NCS can build and transfer playlist information to the MOS. This allows the NCS to control the sequence in which media objects are played or presented by the MOS.
- Status Exchange. The MOS can inform the NCS of the status of specific clips or of the MOS system in general. The NCS can notify the MOS of the status of specific playlist items or running orders.
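As a rough illustration of the XML-based exchange, a running-order (playlist) creation message might be built and read back as follows. The element names used here (`mosID`, `ncsID`, `roCreate`, `roID`, `roSlug`) follow the general shape of MOS messages, but the authoritative message definitions live in the MOS protocol specification and should be checked there.

```python
# Sketch of building and parsing a MOS-style XML message (running-order
# creation). Element names approximate the real schema for illustration.
import xml.etree.ElementTree as ET

def build_ro_create(mos_id, ncs_id, ro_id, slug):
    """Build a minimal roCreate-style message as an XML string."""
    mos = ET.Element("mos")
    ET.SubElement(mos, "mosID").text = mos_id
    ET.SubElement(mos, "ncsID").text = ncs_id
    ro = ET.SubElement(mos, "roCreate")
    ET.SubElement(ro, "roID").text = ro_id
    ET.SubElement(ro, "roSlug").text = slug
    return ET.tostring(mos, encoding="unicode")

msg = build_ro_create("server.example.mos", "ncs.example", "RO-1001", "6PM Bulletin")
parsed = ET.fromstring(msg)
print(parsed.find("roCreate/roSlug").text)  # -> 6PM Bulletin
```

Because both sides exchange plain XML over the wire, either endpoint can validate and route messages without device-specific drivers, which is the point made in the next paragraph.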
MOS was developed to reduce the need for device-specific drivers. By allowing developers to embed functionality and handle events through a common protocol, NCS vendors were relieved of the burden of developing device drivers; interfacing with newsroom computer systems was left to the equipment manufacturers. This approach gives broadcasters the flexibility to purchase equipment from multiple vendors.[5] It also limits the need for operators in multiple locations throughout the studio: for example, multiple character generators (CG) can be fired from a single control workstation, without needing an operator at each CG console.
MOS enables journalists to see, use, and control media devices inside Associated Press’s ENPS system, so that individual pieces of newsroom production technology speak a common XML-based language.
See our related products:
NewsWrap Newsroom Computer System (NRCS) covers the ingestion of wires, the logging and sorting of media elements, and the scripting, editing, and approval of stories. The system takes care of all the key areas of the broadcast chain: collaborating, exchanging information, and getting on air quickly.
NewsWrap has been designed with journalists in mind, and its user-friendly, easy-to-use interface makes it the preferred solution for news management. The system brings together all the media used in today’s fast-moving news presentations, organizing live or recorded news content such as video, text, stills, news agency stories, CG, and graphics for inclusion in the run order.
The NRCS is web based and the same platform can also be extended to publish content to radio, online and print.
See our related products:
Network Device Interface (NDI) is a royalty-free software standard developed by NewTek to enable video-compatible products to communicate, deliver, and receive high-definition video over a computer network in a high-quality, low-latency manner that is frame-accurate and suitable for switching in a live production environment.
NDI 4.5 (the current version) adds support in iOS for real-time, full frame-rate, full-resolution capture of the display over wireless with NDI®|HX Capture for iOS. In addition, the NDI®|HX Camera for iOS app turns any iPhone into a full 4K wireless camera, giving it the same capabilities as a high-end video camera.
See our related products: