
Benefits of Video Conferencing Cameras with Multiple Lenses

A camera is just a camera, right? Well, no.

Think of an iPhone Pro. The current model, iPhone 16 Pro, is four cameras in one: selfie camera, standard wide-angle camera, ultra-wide-angle camera, and telephoto camera.

A similar thing is happening in the world of video conferencing cameras and video bars. There’s an increasing number of multi-lens cameras for business communications.

A camera, in this case, is really cameras.

Multi-lens video conferencing cameras are used for four different functions:

  • Supporting multiple camera types in one device
  • Improving or enabling advanced video features
  • Having a camera in the center of the meeting table to get better angles on people’s faces
  • Creating ultra-wide video by stitching together multiple video feeds in real-time

Some cameras and video bars support more than one of these functions.

In this blog, we go through each of these functions of multi-lens video conferencing cameras.

Let’s get into it!

Yealink MeetingBar A50

Multi-Lens Video Conferencing Cameras: Multiple Cameras in One

With video conferencing, there are two basic shots that are important to achieve.

You don’t need Children of Men style epic one-shots or massive Dune style battle sequences and worm rides. Leave those shots to Hollywood.

What you need in the meeting room are close-ups of speakers and well-framed group shots.

These two shots are best achieved by two different types of cameras: telephoto cameras for close-ups and wide-angle/panoramic cameras for group shots.

But paying for two different cameras isn’t exactly ideal.

So, what if the two cameras were one camera?

Let’s use the upcoming Yealink MeetingBar A50 as an example. This video bar is actually three cameras in one: one wide-angle camera and two telephoto cameras. Each camera has its own 50 MP sensor.

This means that you get two cameras for picking out individual faces and one camera for general overview — and all three offer exceptional detail. It truly is like buying three cameras for the price of one. Plus, because it’s a video bar, you also get a 4-element stereo speaker system, a 16-element microphone array, and more.

Yealink A50 also supports IntelliFocus, which is one of the advanced feature types that multi-lens cameras are so good for.

Yealink SmartVision 60

Multi-Lens Video Conferencing Cameras: Advanced Features

Multi-lens cameras are useful because they can improve or enable advanced video features.

Let’s start with how they can improve features.

Some multi-lens cameras don’t actually use every lens to capture video. Instead, they use a lens or two to improve the automatic camera technologies that have become table stakes for professional video calls in recent years: speaker tracking, group framing, and so on.

These technologies are designed to provide an experience akin to having a director cutting between video feeds to show the best one at the right time.

But no one wants to pay for the Monday Night Football crew to handle the various feeds. Having the automatic technologies is much, much better. You probably don’t want slow-motion replays of your video calls, anyway.

So, in the absence of Joe Buck and Troy Aikman, you get a multi-lens video conferencing camera for automatic tracking and framing.

Yealink UVC86 is a great example. It is a mechanical PTZ camera with 12x optical zoom, plus a panoramic camera on its base. The panoramic camera is used to improve the automatic features by helping the camera more accurately sense where active speakers are and where the edges of the group are, so the PTZ camera can quickly and accurately zoom in on and frame people.

The multiple cameras provide a better video call experience.

They can also help with more advanced features, in particular a feature that’s becoming more common. It has a few names, so we’ll describe it first: the camera (or video conferencing system) isolates individual faces of meeting participants into individual feeds, even though there is only one master feed.

You can think of it like this: you and three friends took a picture in front of Devil’s Tower in Wyoming. (Probably there’s no Close Encounters style UFO behind you. Probably.) Each of you wants an individual portrait from this picture, so you crop four separate pictures. You get a photo with all four of you together, plus four individual portraits.

With video conferencing, where you want to see individuals’ faces and the whole group at the same time, this technology is extremely useful.
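If you’re curious how the cropping idea works in principle, here’s a toy sketch. It is purely conceptual — not how any vendor’s implementation actually works — and the frame, bounding boxes, and `crop` helper are all made up for illustration: one master frame goes in, and several individual “feeds” come out.

```python
# Conceptual sketch of face isolation: one master frame, several cropped feeds.
# A real system detects face bounding boxes with AI; here the boxes are hard-coded.

def crop(frame, top, left, height, width):
    """Return a sub-image (an individual 'feed') cut from the master frame."""
    return [row[left:left + width] for row in frame[top:top + height]]

# A stand-in 6x8 "frame" where each pixel is labeled by its (row, column) position.
frame = [[(r, c) for c in range(8)] for r in range(6)]

# Hypothetical bounding boxes for two participants' faces: (top, left, height, width).
boxes = [(1, 1, 3, 2), (2, 5, 3, 2)]

# The far end receives the master frame plus one cropped feed per face.
feeds = [crop(frame, *box) for box in boxes]

print(len(feeds))      # 2 individual feeds
print(feeds[0][0][0])  # top-left pixel of the first feed: (1, 1)
```

The point of the sketch: there is still only one master feed coming off the sensor; the individual views are regions cut out of it.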

There are, as we said, a few names for this technology: Yealink IntelliFocus, Jabra Dynamic Composition, and Microsoft Multi-Stream IntelliFrame are three of them.

Microsoft Teams IntelliFrame goes one step further by using AI to automatically add the name of the person being shown.

While, strictly speaking, a multi-lens camera isn’t necessary for these technologies, they’re greatly improved by one. The processing power a multi-lens camera needs to handle multiple video feeds also enables these face-isolation features, and having multiple lenses helps cover various scenarios better.

One way to improve isolated video feeds is to stick a multi-lens camera in the middle of the group.

Logitech Sight

Multi-Lens Video Conferencing Cameras: Central Tabletop Camera

Another type of multi-lens camera is a camera that is positioned on a table in the middle of a group sitting around a conference table. The camera has multiple lenses that face in different directions.

These cameras don’t have a common name yet. Yealink, for example, calls MTower a center-view camera. Poly, on the other hand, calls their upcoming Studio E360 a center-of-table or 360° camera. And Logitech calls Sight a tabletop camera. So, the terminology is still a bit of a mess.

Whatever you call it, however, the utility of this kind of multi-lens camera is the same.

When you have a video conferencing camera installed at one end of the room, everyone in the room needs to be facing it. This is good for the video feed but can be bad for collaboration, because people in the room should be able to face each other. That’s how we naturally want to speak with each other.

But if people in the group look at each other with a front-of-room camera, the far-end participants watching the video feed just see the sides of people’s heads. While it might be nice to admire sideburns and earrings, it’s not ideal for collaboration.

So, a central tabletop camera sits in the middle of the group in the room. The camera uses its multiple lenses to show a front-facing view of all the meeting participants.

That’s why the cameras face different directions.

Logitech Sight, for example, has two wide-angle lenses for a total video capture range of 315°. This companion camera to Logitech Rally Bar can intelligently frame up to four participants, and because it’s in the middle of the group, it gets a much better angle on people’s faces than a front-of-room camera.

Better still, Sight works with a Logitech video bar, so the video bar provides a group view while Sight provides individual views — what Logitech calls a front-and-center solution.

These cameras are particularly useful in rooms with smaller conference tables. As room size increases, people sit farther from the central camera, and it becomes less effective.

Another multi-lens camera solution for small rooms is a camera that stitches multiple feeds together.

Jabra PanaCast 40

Multi-Lens Video Conferencing Cameras: Feeds Stitched Together

The other reason you’d want a multi-lens video conferencing camera is to provide ultra-wide coverage in small conference rooms.

Why small rooms? Because there’s less space, people sit closer to the camera than in larger rooms. When people sit close to a camera, it’s very easy to cut off those sitting toward the edges of the group.

It’s basically the problem of trying to get your whole bachelor party in one selfie. Except, you know, without the bachelor party shenanigans.

Why not just use an ultra-wide lens? Distortion. Objects on the edges of an image taken using an ultra-wide lens get stretched out and curved. You’ve probably noticed this on shots using the ultra-wide lens on your smartphone.

So if, instead of using one ultra-wide lens, you use multiple feeds from multiple lenses, you greatly reduce the amount of distortion.

The technology is something like the panoramic function on your smartphone’s camera. The smartphone is actually stitching together a whole bunch of photos into one ultra-wide, panoramic shot.
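To make the stitching idea concrete, here’s a toy sketch. It’s a deliberate oversimplification — real stitching aligns features and corrects perspective across full frames — but it shows the core move: two overlapping captures become one wider image, with the shared region blended. The scanlines, overlap size, and `stitch` helper are all invented for illustration.

```python
# Conceptual sketch of stitching: two overlapping 1-D "scanlines" are merged
# by averaging the shared pixels, the same basic idea a panorama applies per frame.

def stitch(left, right, overlap):
    """Join two scanlines that share `overlap` pixels, blending the shared region."""
    blended = [(a + b) / 2 for a, b in zip(left[-overlap:], right[:overlap])]
    return left[:-overlap] + blended + right[overlap:]

left_feed = [10, 10, 10, 20, 30]   # last 2 pixels overlap...
right_feed = [20, 30, 40, 40, 40]  # ...with the first 2 pixels here

pano = stitch(left_feed, right_feed, overlap=2)
print(pano)  # [10, 10, 10, 20.0, 30.0, 40, 40, 40]
```

Notice the result is wider than either input: that’s how two normal lenses add up to one ultra-wide view without ultra-wide distortion.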

Jabra PanaCast 40, for example, can do the same thing with video feeds in real time, seamlessly. It uses two integrated cameras, which allows PanaCast 40 to produce 180° panoramic video with minimal distortion.

A single lens with 180° coverage would be a fisheye lens. You don’t want your meeting to look like a music video from the 1990s, do you?

PanaCast 40 also uses its distortion-reduced feed for Dynamic Composition in Microsoft Teams, which is essentially the same thing as the Multi-Stream IntelliFrame and IntelliFocus features we discussed above. You get up to four individual close-ups plus a general overview from one camera.

Shop Video Conferencing at IP Phone Warehouse