The goal of this issue is to gather information/observations to support multi-stream in video conferences. This means having a conference with several active participants and/or one active participant with multiple videos.
So will it be possible to also have multiple cameras at once for a 1:1 video call using just Jami?
(Without having to use a virtual webcam like OBS-VirtualCam)
Can you implement this feature for 1:1 video calls too?
This is not in the scope of this issue, and probably not. With swarm and multi-swarm, all calls will be conferences, and at some point 1:1 will not make sense anymore (except for SIP accounts).
You can already do a rendezvous with yourself today; it will be a conference with just you.
So this will support multi-stream (for example facecam + screen sharing at the same time)?
1:1 calls are not conferences, however.
I thought a conference / rendezvous is just a video call with more than 2 participants? Is there any documentation on what the difference is between a 1:1 video call and a conference alone?
Is it possible to have a group video call WITHOUT having a conference at the same time?
And in the near future: if Multi-Swarm supports audio and video calls (with one or more participants), how can I create a Multi-Swarm-Conference?
So that all participants can use group text-chat AND group video calling at the same time?
Can you / the team add this information to the docs?
So this will support multi-stream (for example facecam + screen sharing at the same time)?
Yes
I thought...
A conference is a call where a host mixes the video, so you get a grid of participants along with layout information.
If you call a rendezvous account, or if you are added to an ongoing video conference, it will not be a 1:1.
If you see your preview at the top (on client-qt), you are in a 1:1 call. All the docs are in the wiki.
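To make the distinction above concrete, here is a minimal, hypothetical sketch (C++, not Jami's actual code or API; the names `mixGrid` and `LayoutCell` are invented for illustration): in a 1:1 call the client simply renders the single remote stream, while in a conference the host mixes every participant into one composite frame and shares the grid layout with the clients.

```cpp
// Conceptual sketch only: a host-side mixer computes a grid layout for N
// participants and composites them into one frame; a 1:1 call has no mixing.
#include <cmath>
#include <cstdio>
#include <string>
#include <vector>

struct LayoutCell {            // where one participant sits inside the mixed frame
    std::string participant;
    int x, y, w, h;
};

// Hypothetical mixer: compute a simple N-cell grid for the composite frame.
std::vector<LayoutCell> mixGrid(const std::vector<std::string>& participants,
                                int frameW, int frameH) {
    const int n = static_cast<int>(participants.size());
    const int cols = static_cast<int>(std::ceil(std::sqrt(n)));
    const int rows = (n + cols - 1) / cols;
    std::vector<LayoutCell> layout;
    for (int i = 0; i < n; ++i) {
        layout.push_back({participants[i],
                          (i % cols) * (frameW / cols),
                          (i / cols) * (frameH / rows),
                          frameW / cols, frameH / rows});
    }
    return layout;
}

int main() {
    // 1:1 call: no mixing, each side renders the single remote stream directly
    // and shows its own preview on top.
    std::printf("1:1 call: render remote stream full-size, local preview on top\n");

    // Conference: the host mixes all sources and shares the layout with clients.
    for (const auto& cell : mixGrid({"alice", "bob", "carol"}, 1280, 720))
        std::printf("conference cell %-5s at (%d,%d) %dx%d\n",
                    cell.participant.c_str(), cell.x, cell.y, cell.w, cell.h);
}
```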
Multi-Swarm-Conference
For now, calls in a swarm with multiple participants are not implemented, and all of this is already in the issues and the docs (still the wiki).
So you can talk using the audio input of your webcam and hear the internal sound of your presentation / YouTube video and even use a second webcam where you hear for example animal sounds?
This is about changing an internal protocol, not the rest. 1189 is not related to this, but yeah, ideally multiple audio streams will come at some point (probably not this year, at least).
What happens / does it matter if the GPU/CPU usage of each video (?) is completely different?
* One (other) example: I use the share screen option to show video games? (Can you also test this feature using the screen share option to show video games?)
* If I use other active videos like real webcams and/or HDMI capture cards at the same time?

Android: avoid too many (unused) windows jami-client-android#922 (closed), if you tap buttons like Home, Back, and App Overview and/or if you use other apps
What happens / does it matter if the GPU/CPU usage of each video (?) is completely different?
-> ? I don't understand what you are asking. Anyway, your driver generally doesn't support much simultaneous encoding.
I use the share screen option to show video games?
It's not a question.
Can one participant move each active video?
These are not separate active videos; the YouTube video is only showing one scene. And yeah, I think that is just out of scope for Jami and we can't maintain such a project. For this kind of thing, OBS or any virtual camera will be way better and better maintained. It's a full-time project for a big team.
But this, jami-client-qt#500, has been possible since video-split and is not related to multi-stream.
I actually meant that if I use facecam + screen share at the same time (just using Jami's multi-stream, without any virtual camera), I can drag the video window of the facecam to wherever I want (left / right / in the middle of the screen, for example).
It's a closed ticket
I know. But if you use multiple windows (facecam + screen share), I think it will become relevant again? Or at least keep in mind that you can tap buttons like Home, Back, and App Overview.
It will just be 2 completely separate video surfaces, like 2 participants in the same conference, not one surface with 2 videos. Managing one video surface with multiple videos is a completely different logic/project. It would be a full-time project for a big team.
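As a rough illustration of that "separate surfaces" model, here is a small hypothetical sketch (C++, not Jami's client code; `StreamRenderer`, `onStreamAdded` and the stream ids are invented): each incoming stream, whether it is another participant's camera or the same participant's screen share, simply gets its own render surface, exactly like one more member of the conference.

```cpp
// Sketch only: every incoming media stream gets its own independent surface;
// nothing is composited into a single surface on the sender side.
#include <cstdio>
#include <map>
#include <string>

struct Surface {                     // one independent window/texture per stream
    std::string owner;               // participant the stream belongs to
    std::string kind;                // "camera", "screen-share", ...
};

class StreamRenderer {
    std::map<std::string, Surface> surfaces_;   // keyed by stream id
public:
    void onStreamAdded(const std::string& streamId,
                       const std::string& owner, const std::string& kind) {
        surfaces_[streamId] = {owner, kind};    // new surface, movable on its own
        std::printf("new surface for %s (%s)\n", owner.c_str(), kind.c_str());
    }
    void onStreamRemoved(const std::string& streamId) { surfaces_.erase(streamId); }
};

int main() {
    StreamRenderer r;
    // One participant sending two streams looks just like two participants:
    r.onStreamAdded("alice/cam0", "alice", "camera");
    r.onStreamAdded("alice/screen0", "alice", "screen-share");
}
```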
client-android 922
is pretty unrelated to this too; as I said, this issue is about an internal protocol.