

Chrome WebRTC source code architecture analysis

Code architecture diagram:

Code flow diagram:

Flow diagram:

Connection establishment flow

1 Each peer connects to the signaling server and negotiates SDP through it: one side sends an offer, the other replies with an answer.
2 Each peer talks to one or more STUN servers (there can be several, independent of each other) to learn its public IP and to gather its ICE candidates, including local host addresses and the server-reflexive (public) address;
this uses STUN Binding Request and Binding Success Response messages.
3 The peers exchange their candidates through the signaling server.
4 After receiving the remote candidates, each peer sorts the candidate pairs and sends STUN connectivity checks to each one; the best pair that passes is selected (see the priority sketch after this list). https://blog.csdn.net/MeRcy_PM/article/details/55806415
At this point the P2P channel is established.
5 A DTLS handshake is performed to derive the keys.
6 Media is then carried over SRTP.
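
For step 4, the order in which candidate pairs are checked comes from the ICE priority formulas in RFC 8445 (section 5.1.2.1 for candidates, 6.1.2.3 for pairs). A small standalone sketch of those formulas (the helper names are invented for illustration; WebRTC computes the same values inside its ICE code):

#include <algorithm>
#include <cstdint>
#include <cstdio>

// Candidate priority, RFC 8445 section 5.1.2.1:
//   priority = 2^24 * type_pref + 2^8 * local_pref + (256 - component_id)
// Recommended type preferences: host 126, peer-reflexive 110, server-reflexive 100, relay 0.
uint32_t CandidatePriority(uint32_t type_pref, uint32_t local_pref, uint32_t component_id) {
  return (type_pref << 24) | (local_pref << 8) | (256 - component_id);
}

// Candidate-pair priority, RFC 8445 section 6.1.2.3; G is the controlling
// agent's candidate priority, D the controlled agent's.
uint64_t PairPriority(uint32_t g, uint32_t d) {
  return (static_cast<uint64_t>(1) << 32) * std::min(g, d) +
         2ull * std::max(g, d) + (g > d ? 1 : 0);
}

int main() {
  uint32_t host = CandidatePriority(126, 65535, 1);   // host candidate, RTP component
  uint32_t srflx = CandidatePriority(100, 65535, 1);  // server-reflexive candidate from STUN
  std::printf("host  %u\nsrflx %u\n", host, srflx);
  // Pairs are checked in descending pair priority.
  std::printf("pair(host,host)   %llu\n", static_cast<unsigned long long>(PairPriority(host, host)));
  std::printf("pair(host,srflx)  %llu\n", static_cast<unsigned long long>(PairPriority(host, srflx)));
}

With the recommended type preferences (host 126, server-reflexive 100, relay 0), host-host pairs sort first and relayed pairs last, which is the ordering used when picking the best pair in step 4.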

Low-level code mechanisms in WebRTC:

Analysis of the SignalReadEvent mechanism:
This section analyzes the SignalReadEvent mechanism and how an RTP packet travels from the network up to rtp_transport.
When epoll reports an incoming packet, SocketDispatcher::OnEvent fires SignalReadEvent(this) and then returns. So what actually triggers reading the data?
Let's look at how SignalReadEvent is declared:
// SignalReadEvent and SignalWriteEvent use multi_threaded_local to allow
// access concurrently from different thread.
// For example SignalReadEvent::connect will be called in AsyncUDPSocket ctor
// but at the same time the SocketDispatcher maybe signaling the read event.
// ready to read
sigslot::signal1<AsyncSocket*, sigslot::multi_threaded_local> SignalReadEvent;
// ready to write
sigslot::signal1<AsyncSocket*, sigslot::multi_threaded_local>
SignalWriteEvent;
The read and write signals are declared in the same way.

Tracing back further:
template <typename A1, typename mt_policy = SIGSLOT_DEFAULT_MT_POLICY>
using signal1 = signal_with_thread_policy<mt_policy, A1>;

template <class mt_policy, typename... Args>
class signal_with_thread_policy : public _signal_base<mt_policy> {
private:
typedef _signal_base<mt_policy> base;

protected:
typedef typename base::connections_list connections_list;

public:
signal_with_thread_policy() {}

template <class desttype>
void connect(desttype* pclass, void (desttype::*pmemfun)(Args...)) {
lock_block<mt_policy> lock(this);
this->m_connected_slots.push_back(_opaque_connection(pclass, pmemfun));  // stores the slot; here pmemfun will be AsyncUDPSocket::OnReadEvent (see below)
pclass->signal_connect(static_cast<_signal_base_interface*>(this));  // how does this work, and why call it? see the signal_connect section below
}

void emit(Args... args) {  // step 2: operator() forwards here
lock_block<mt_policy> lock(this);
this->m_current_iterator = this->m_connected_slots.begin();
while (this->m_current_iterator != this->m_connected_slots.end()) {  // iterate over every connected slot
_opaque_connection const& conn = *this->m_current_iterator;
++(this->m_current_iterator);
conn.emit<Args...>(args...);  // step 3: what is this? see _opaque_connection below
}
}

void operator()(Args... args) { emit(args...); }  // step 1: SignalReadEvent(this) enters here; the argument is the SocketDispatcher
};

Continuing from the above: what does conn.emit<Args...>(args...) actually call?
First look at this member:
protected:
connections_list m_connected_slots;
Entries are push_back'ed into it in connect(), so we only need to find out where connect() is called.
SignalReadEvent is a member of AsyncSocket, so the connect happens on the concrete socket wrapper:
AsyncUDPSocket::AsyncUDPSocket(AsyncSocket* socket) : socket_(socket) {
size_ = BUF_SIZE;
buf_ = new char[size_];

// The socket should start out readable but not writable.
socket_->SignalReadEvent.connect(this, &AsyncUDPSocket::OnReadEvent); // connect happens in the constructor; OnReadEvent is the
// function that gets invoked when SignalReadEvent(this) fires
socket_->SignalWriteEvent.connect(this, &AsyncUDPSocket::OnWriteEvent);
}
// Analysis:

template <class desttype>
void connect(desttype* pclass, void (desttype::*pmemfun)(Args...)) {
lock_block<mt_policy> lock(this);
this->m_connected_slots.push_back(_opaque_connection(pclass, pmemfun));  // stores the slot; pmemfun is AsyncUDPSocket::OnReadEvent here
pclass->signal_connect(static_cast<_signal_base_interface*>(this));  // registers this signal (the signal1) in the slot's m_senders set
}

void emit(Args... args) {  // step 2: operator() forwards here
lock_block<mt_policy> lock(this);
this->m_current_iterator = this->m_connected_slots.begin();
while (this->m_current_iterator != this->m_connected_slots.end()) {
_opaque_connection const& conn = *this->m_current_iterator;
++(this->m_current_iterator);
conn.emit<Args...>(args...);  // step 3: what is this? see _opaque_connection below
}
}
// Related structures:
protected:
connections_list m_connected_slots;
protected:
typedef std::list<_opaque_connection> connections_list;
class _opaque_connection {
private:
typedef void (*emit_t)(const _opaque_connection*);
template <typename FromT, typename ToT>
union union_caster {
FromT from;
ToT to;
};

emit_t pemit;
has_slots_interface* pdest;
// Pointers to member functions may be up to 16 bytes for virtual classes,
// so make sure we have enough space to store it.
unsigned char pmethod[16];

public:
template <typename DestT, typename... Args>  // the member function passed to connect (OnReadEvent) arrives here as pm
_opaque_connection(DestT* pd, void (DestT::*pm)(Args...)) : pdest(pd) {
typedef void (DestT::*pm_t)(Args...);
static_assert(sizeof(pm_t) <= sizeof(pmethod),
"Size of slot function pointer too large.");

std::memcpy(pmethod, &pm, sizeof(pm_t));  // the bytes of the member function pointer pm are copied into pmethod

typedef void (*em_t)(const _opaque_connection* self, Args...);
union_caster<em_t, emit_t> caster2;
caster2.from = &_opaque_connection::emitter<DestT, Args...>;  // note that caster2 is a union, used to type-pun the emitter pointer
pemit = caster2.to;
}

has_slots_interface* getdest() const { return pdest; }

_opaque_connection duplicate(has_slots_interface* newtarget) const {
_opaque_connection res = *this;
res.pdest = newtarget;
return res;
}

// Just calls the stored "emitter" function pointer stored at construction
// time.
template <typename... Args>
void emit(Args... args) const {  // SignalReadEvent ultimately ends up calling this
typedef void (*em_t)(const _opaque_connection*, Args...);
union_caster<emit_t, em_t> caster;
caster.from = pemit;
(caster.to)(this, args...);  // this actually calls emitter<DestT, Args...>
}

private:
template <typename DestT, typename... Args>
static void emitter(const _opaque_connection* self, Args... args) {  // the real dispatch happens here
typedef void (DestT::*pm_t)(Args...);
pm_t pm;
std::memcpy(&pm, self->pmethod, sizeof(pm_t));  // copy the stored member function pointer (OnReadEvent) back into pm
(static_cast<DestT*>(self->pdest)->*(pm))(args...);  // then invoke pm on pdest, i.e. AsyncUDPSocket::OnReadEvent
}
};
---------------------------------------------------signal_connect mechanism:------------------------
class RTC_EXPORT AsyncPacketSocket : public sigslot::has_slots<> {
template <class mt_policy = SIGSLOT_DEFAULT_MT_POLICY>


class has_slots : public has_slots_interface, public mt_policy {
private:
typedef std::set<_signal_base_interface*> sender_set;
typedef sender_set::const_iterator const_iterator;

public:
has_slots()
: has_slots_interface(&has_slots::do_signal_connect,
&has_slots::do_signal_disconnect,
&has_slots::do_disconnect_all) {}

has_slots(has_slots const& o)
: has_slots_interface(&has_slots::do_signal_connect,
&has_slots::do_signal_disconnect,
&has_slots::do_disconnect_all) {
lock_block<mt_policy> lock(this);
for (auto* sender : o.m_senders) {
sender->slot_duplicate(&o, this);
m_senders.insert(sender);
}
}

~has_slots() { this->disconnect_all(); }

private:
has_slots& operator=(has_slots const&);

static void do_signal_connect(has_slots_interface* p,  // this is the function signal_connect ends up calling
_signal_base_interface* sender) {
has_slots* const self = static_cast<has_slots*>(p);
lock_block<mt_policy> lock(self);
self->m_senders.insert(sender);
}

static void do_signal_disconnect(has_slots_interface* p,
_signal_base_interface* sender) {
has_slots* const self = static_cast<has_slots*>(p);
lock_block<mt_policy> lock(self);
self->m_senders.erase(sender);
}

static void do_disconnect_all(has_slots_interface* p) {
has_slots* const self = static_cast<has_slots*>(p);
lock_block<mt_policy> lock(self);
while (!self->m_senders.empty()) {
std::set<_signal_base_interface*> senders;
senders.swap(self->m_senders);
const_iterator it = senders.begin();
const_iterator itEnd = senders.end();

while (it != itEnd) {
_signal_base_interface* s = *it;
++it;
s->slot_disconnect(p);
}
}
}

private:
sender_set m_senders;
};
-------------------------------------------------
class has_slots_interface {
private:
typedef void (*signal_connect_t)(has_slots_interface* self,
_signal_base_interface* sender);
typedef void (*signal_disconnect_t)(has_slots_interface* self,
_signal_base_interface* sender);
typedef void (*disconnect_all_t)(has_slots_interface* self);

const signal_connect_t m_signal_connect;
const signal_disconnect_t m_signal_disconnect;
const disconnect_all_t m_disconnect_all;

protected:
has_slots_interface(signal_connect_t conn,
signal_disconnect_t disc,
disconnect_all_t disc_all)
: m_signal_connect(conn),
m_signal_disconnect(disc),
m_disconnect_all(disc_all) {}

// Doesn't really need to be virtual, but is for backwards compatibility
// (it was virtual in a previous version of sigslot).
virtual ~has_slots_interface() {}

public:
void signal_connect(_signal_base_interface* sender) {
m_signal_connect(this, sender);
}

void signal_disconnect(_signal_base_interface* sender) {
m_signal_disconnect(this, sender);
}

void disconnect_all() { m_disconnect_all(this); }
};
-------------------------------------------------signal_connect_end-------------------
------------------------------ so the function that finally gets called is:
void AsyncUDPSocket::OnReadEvent(AsyncSocket* socket) {
RTC_DCHECK(socket_.get() == socket);

SocketAddress remote_addr;
int64_t timestamp;
int len = socket_->RecvFrom(buf_, size_, &remote_addr, &timestamp);
if (len < 0) {
// An error here typically means we got an ICMP error in response to our
// send datagram, indicating the remote address was unreachable.
// When doing ICE, this kind of thing will often happen.
// TODO: Do something better like forwarding the error to the user.
SocketAddress local_addr = socket_->GetLocalAddress();
RTC_LOG(LS_INFO) << "AsyncUDPSocket[" << local_addr.ToSensitiveString()
<< "] receive failed with error " << socket_->GetError();
return;
}

// TODO: Make sure that we got all of the packet.
// If we did not, then we should resize our buffer to be large enough.
SignalReadPacket(this, buf_, static_cast<size_t>(len), remote_addr,
(timestamp > -1 ? timestamp : TimeMicros()));
}
--------------------------------

This sigslot signal/slot mechanism is used all over the WebRTC code base, hence this brief walkthrough.
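
To make the mechanism concrete, here is a minimal standalone sketch (all names are invented; this is not the sigslot source) that mirrors the same idea: connect() stores the raw bytes of a member-function pointer together with a type-erased "emitter" trampoline, and invoking the signal replays every stored slot:

#include <cstring>
#include <iostream>
#include <list>

template <typename Arg>
class MiniSignal {
 public:
  template <typename Dest>
  void connect(Dest* obj, void (Dest::*method)(Arg)) {
    Connection c;
    c.dest = obj;
    static_assert(sizeof(method) <= sizeof(c.method), "method pointer too large");
    std::memcpy(c.method, &method, sizeof(method));  // store the member pointer bytes
    c.emitter = &MiniSignal::Emitter<Dest>;          // type-erased trampoline
    slots_.push_back(c);
  }

  void operator()(Arg arg) {                         // SignalReadEvent(this) style call
    for (const Connection& c : slots_) c.emitter(c, arg);
  }

 private:
  struct Connection {
    void* dest;
    unsigned char method[16];
    void (*emitter)(const Connection&, Arg);
  };

  template <typename Dest>
  static void Emitter(const Connection& c, Arg arg) { // recover the member pointer and call it
    void (Dest::*method)(Arg);
    std::memcpy(&method, c.method, sizeof(method));
    (static_cast<Dest*>(c.dest)->*method)(arg);
  }

  std::list<Connection> slots_;
};

// Stand-ins for SocketDispatcher / AsyncUDPSocket.
struct FakeSocket {
  MiniSignal<FakeSocket*> SignalReadEvent;
};

struct FakeUdpWrapper {
  explicit FakeUdpWrapper(FakeSocket* s) {
    s->SignalReadEvent.connect(this, &FakeUdpWrapper::OnReadEvent);
  }
  void OnReadEvent(FakeSocket* s) { std::cout << "OnReadEvent fired, socket=" << s << std::endl; }
};

int main() {
  FakeSocket socket;
  FakeUdpWrapper wrapper(&socket);
  socket.SignalReadEvent(&socket);  // what SocketDispatcher::OnEvent does after epoll reports readable
}

Running it prints one line from OnReadEvent, which is exactly what happens when SocketDispatcher fires SignalReadEvent(this) on a real AsyncUDPSocket.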

AheadOf analysis:

There are many calls to AheadOf in the WebRTC code; the standalone program below works out its exact meaning from how it is used.

#include <algorithm>  // std::min / std::max used by MinDiff below
#include <type_traits>
#include <new>
#include <iostream>
#include <string>
#include <limits>
using namespace std;

template <typename T, T M>
inline typename std::enable_if<(M == 0), T>::type ForwardDiff(T a, T b) {
static_assert(std::is_unsigned<T>::value,
"Type must be an unsigned integer.");
return b - a;
}

template <typename T>
inline T ForwardDiff(T a, T b) {
return ForwardDiff<T, 0>(a, b);
}

template <typename T, T M>
inline typename std::enable_if<(M == 0), T>::type ReverseDiff(T a, T b) {
static_assert(std::is_unsigned<T>::value,
"Type must be an unsigned integer.");
return a - b;
}

template <typename T>
inline T ReverseDiff(T a, T b) {
return ReverseDiff<T, 0>(a, b);
}
// The minimum distance is defined as min(ForwardDiff(a, b), ReverseDiff(a, b))
template <typename T, T M = 0>
inline T MinDiff(T a, T b) {
static_assert(std::is_unsigned<T>::value,
"Type must be an unsigned integer.");
return std::min(ForwardDiff<T, M>(a, b), ReverseDiff<T, M>(a, b));
}

// Test if the sequence number |a| is ahead or at sequence number |b|.
//
// If |M| is an even number and the two sequence numbers are at max distance
// from each other, then the sequence number with the highest value is
// considered to be ahead.
template <typename T, T M>
inline typename std::enable_if<(M > 0), bool>::type AheadOrAt(T a, T b) {
static_assert(std::is_unsigned<T>::value,
"Type must be an unsigned integer.");
const T maxDist = M / 2;
if (!(M & 1) && MinDiff<T, M>(a, b) == maxDist)
return b < a;
return ForwardDiff<T, M>(b, a) <= maxDist;
}

// M == 0 specialization: the wrap-around point is the full range of T; a is "ahead of or at" b
// when the forward distance from b to a is at most half the range.
template <typename T, T M>
inline typename std::enable_if<(M == 0), bool>::type AheadOrAt(T a, T b) {
static_assert(std::is_unsigned<T>::value,
"Type must be an unsigned integer.");
const T maxDist = std::numeric_limits<T>::max() / 2 + T(1);
if (a - b == maxDist) //a>b and distance == maxDist
return b < a;
return ForwardDiff(b, a) < maxDist;  // i.e. a - b < maxDist: if a > b this is a plain distance check; if a < b the subtraction wraps around (unsigned arithmetic) and stays < maxDist only when b is more than half the range ahead of a
// e.g. 1u - 3u == max() - 1, and (T)-1 == max(); in general, for a < b, a - b == max() - (b - a) + 1
}

template <typename T>
inline bool AheadOrAt(T a, T b) {
return AheadOrAt<T, 0>(a, b);
}


/* a>b:
if a-b == max()/2+1; true
if a-b < max()/2+1:true
else false
*/
/* a<b: a-b<0
if a-b< max()/2+1: true ==> a-b == max()-(b-a)+1 < max()/2+1 true => b-a > max()/2+1 true
else false
*/
// Test if the sequence number |a| is ahead of sequence number |b|.
//
// If |M| is an even number and the two sequence numbers are at max distance
// from each other, then the sequence number with the highest value is
// considered to be ahead.
template <typename T, T M = 0>
inline bool AheadOf(T a, T b) {
static_assert(std::is_unsigned<T>::value,
"Type must be an unsigned integer.");
return a != b && AheadOrAt<T, M>(a, b);
}

#include<iostream>


int main()
{
/* a>b:
if a-b == max()/2+1; true
if a-b < max()/2+1:true
else false
*/
/* a<b: a-b<0  note: max() == max()/2 + max()/2 + 1 because the unsigned max value is odd
if a-b< max()/2+1: true ==> a-b == max()-(b-a)+1 < max()/2+1 true => b-a > max()/2+1 true
else false
*/

//a > b cases: (5, 2) true | (maxt + 5, 5) true | (maxt + 6, 1) false
uint32_t maxt = std::numeric_limits<uint32_t>::max() / 2 + 1u;
cout << "maxt:" << maxt << endl ;
if(AheadOf(5u,2u))
{
std::cout << "5,2 " << 5u-2u << " true:" << endl;//in here
}
if(AheadOf(maxt+5u,5u))
{
std::cout << "maxt+5,5 " << maxt+5u-5u << " true" << endl;

}

if(AheadOf(maxt+6u,1u) == false)
{
std::cout <<"maxt+6u,1u " << maxt+6u-1u << " false" << endl;
}

cout << "------------------" << endl;


//a < b cases: (3, maxt + 6) true | (4, maxt + 4) false | (4, maxt + 3) false
if(AheadOf(3u,maxt+6u))
{
std::cout << "3,maxt+6: "<< (maxt+6u)-3u << " true" << endl;
}

if(AheadOf(4u,maxt+4u) == false)
{
std::cout << " 4,maxt+4: " << (maxt+4u) -4u << " false" << endl;
}
if(AheadOf(4u,maxt+3u) == false)
{
std::cout << "4,maxt+3 : " << (maxt+3u)-4u << " false" << endl;
}

uint32_t maxtt = std::numeric_limits<uint32_t>::max();
cout << "maxtt:" << maxtt << endl;
return 0;

}
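
In WebRTC the typical argument is a 16-bit RTP sequence number, which wraps around quickly. Adding a few lines like these to main() above (values picked only for illustration) shows the wrap-around case:

uint16_t newer = 1u, older = 65535u;     // 1 follows 65535 after wrap-around
cout << AheadOf(newer, older) << endl;   // prints 1: forward distance from 65535 to 1 is 2 < 32768
cout << AheadOf(older, newer) << endl;   // prints 0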

Analysis of related call flows:

RTP audio receive path:

Top-down call path (transport to media channel):
webrtc::RtpTransport::DemuxPacket(rtc::CopyOnWriteBuffer * packet, __int64 packet_time_us)  line 202
webrtc::RtpDemuxer::OnRtpPacket(const webrtc::RtpPacketReceived & packet)  line 158
void BaseChannel::OnRtpPacket(const webrtc::RtpPacketReceived& parsed_packet) {
media_channel_->OnPacketReceived(
cricket::BaseChannel::OnPacketReceived(bool rtcp, const rtc::CopyOnWriteBuffer & packet, __int64 packet_time_us)  line 506
void WebRtcVideoChannel::OnPacketReceived(rtc::CopyOnWriteBuffer packet,  third_party/webrtc/media/engine/webrtc_video_engine.cc


Reverse (bottom-up) call stack, inserting the packet into NetEq:
webrtc::PacketBuffer::InsertPacketList(std::list<webrtc::Packet,std::allocator<webrtc::Packet> > * packet_list, const webrtc::DecoderDatabase & decoder_database, absl::optional<unsigned char> * current_rtp_payload_type, absl::optional<unsigned char> * current_cng_rtp_payload_type, webrtc::StatisticsCalculator * stats)  line 139
webrtc::NetEqImpl::InsertPacketInternal(const webrtc::RTPHeader & rtp_header, rtc::ArrayView<unsigned char const ,-4711> payload, unsigned int receive_timestamp)  line 712  // adds the data to the packet_buffer_ queue, waiting to be decoded
webrtc::NetEqImpl::InsertPacket(const webrtc::RTPHeader & rtp_header, rtc::ArrayView<unsigned char const ,-4711> payload, unsigned int receive_timestamp)  line 148  third_party/webrtc/modules/audio_coding/neteq/neteq_impl.cc
webrtc::acm2::AcmReceiver::InsertPacket(const webrtc::WebRtcRTPHeader & rtp_header, rtc::ArrayView<unsigned char const ,-4711> incoming_payload)  line 110

if (neteq_->InsertPacket(rtp_header, incoming_payload) < 0) {

webrtc::`anonymous namespace'::AudioCodingModuleImpl::IncomingPacket(const unsigned char * incoming_payload, const unsigned int payload_length, const webrtc::WebRtcRTPHeader & rtp_header)  line 811
webrtc::voe::`anonymous namespace'::ChannelReceive::OnReceivedPayloadData(const unsigned char * payloadData, unsigned int payloadSize, const webrtc::WebRtcRTPHeader * rtpHeader)  line 289
webrtc::voe::`anonymous namespace'::ChannelReceive::ReceivePacket(const unsigned char * packet, unsigned int packet_length, const webrtc::RTPHeader & header)  line 675  third_party/webrtc/audio/channel_receive.cc
webrtc::voe::`anonymous namespace'::ChannelReceive::OnRtpPacket(const webrtc::RtpPacketReceived & packet)  line 624
webrtc::RtpDemuxer::OnRtpPacket(const webrtc::RtpPacketReceived & packet)  line 158
bool RtpDemuxer::OnRtpPacket(const RtpPacketReceived& packet) {
RtpPacketSinkInterface* sink = ResolveSink(packet);
if (sink != nullptr) {
sink->OnRtpPacket(packet);
return true;
}
return false;
}

webrtc::RtpStreamReceiverController::OnRtpPacket(const webrtc::RtpPacketReceived & packet)  line 54
webrtc::internal::Call::DeliverRtp(webrtc::MediaType media_type, rtc::CopyOnWriteBuffer packet, __int64 packet_time_us)  line 1318
bool RtpStreamReceiverController::OnRtpPacket(const RtpPacketReceived& packet) {
RTC_DCHECK_RUN_ON(&demuxer_sequence_);
return demuxer_.OnRtpPacket(packet);
}
webrtc::internal::Call::DeliverPacket(webrtc::MediaType media_type, rtc::CopyOnWriteBuffer packet, __int64 packet_time_us)  line 1356
cricket::WebRtcVoiceMediaChannel::OnPacketReceived(rtc::CopyOnWriteBuffer * packet, __int64 packet_time_us)  line 2057





Reverse (bottom-up) call stack, decoding:
opus_decode(OpusDecoder * st, const unsigned char * data, int len, short * pcm, int frame_size, int decode_fec)  line 766
DecodeNative(WebRtcOpusDecInst * inst, const unsigned char * encoded, unsigned int encoded_bytes, int frame_size, short * decoded, short * audio_type, int decode_fec)  line 341
WebRtcOpus_Decode(WebRtcOpusDecInst * inst, const unsigned char * encoded, unsigned int encoded_bytes, short * decoded, short * audio_type)  line 361
webrtc::AudioDecoderOpusImpl::DecodeInternal(const unsigned char * encoded, unsigned int encoded_len, int sample_rate_hz, short * decoded, webrtc::AudioDecoder::SpeechType * speech_type)  line 126
webrtc::AudioDecoder::Decode(const unsigned char * encoded, unsigned int encoded_len, int sample_rate_hz, unsigned int max_decoded_bytes, short * decoded, webrtc::AudioDecoder::SpeechType * speech_type)  line 98
webrtc::`anonymous namespace'::OpusFrame::Decode(rtc::ArrayView<short,-4711> decoded)  line 54

webrtc::NetEqImpl::DecodeLoop(std::list<webrtc::Packet,std::allocator<webrtc::Packet> > * packet_list, const webrtc::Operations & operation, webrtc::AudioDecoder * decoder, int * decoded_length, webrtc::AudioDecoder::SpeechType * speech_type)  line 1445
auto opt_result = packet_list->front().frame->Decode(
webrtc::NetEqImpl::Decode(std::list<webrtc::Packet,std::allocator<webrtc::Packet> > * packet_list, webrtc::Operations * operation, int * decoded_length, webrtc::AudioDecoder::SpeechType * speech_type)  line 1356
webrtc::NetEqImpl::GetAudioInternal(webrtc::AudioFrame * audio_frame, bool * muted, absl::optional<enum webrtc::Operations> action_override)  line 846  // takes the packets chosen by GetDecision and decodes them
webrtc::NetEqImpl::GetAudio(webrtc::AudioFrame * audio_frame, bool * muted, absl::optional<enum webrtc::Operations> action_override)  line 211
webrtc::acm2::AcmReceiver::GetAudio(int desired_freq_hz, webrtc::AudioFrame * audio_frame, bool * muted)  line 127
webrtc::`anonymous namespace'::AudioCodingModuleImpl::PlayoutData10Ms(int desired_freq_hz, webrtc::AudioFrame * audio_frame, bool * muted)  line 840
webrtc::voe::`anonymous namespace'::ChannelReceive::GetAudioFrameWithInfo(int sample_rate_hz, webrtc::AudioFrame * audio_frame)  line 341

webrtc::AudioMixerImpl::GetAudioFromSources()  line 190
webrtc::AudioMixerImpl::Mix(unsigned int number_of_channels, webrtc::AudioFrame * audio_frame_for_mixing)  line 129
frame_combiner_.Combine(GetAudioFromSources(output_frequency),  // gathers audio from every source and mixes it
number_of_channels, output_frequency,
number_of_streams, audio_frame_for_mixing);
webrtc::AudioTransportImpl::NeedMorePlayData(const unsigned int nSamples, const unsigned int nBytesPerSample, const unsigned int nChannels, const unsigned int samplesPerSec, void * audioSamples, unsigned int & nSamplesOut, __int64 * elapsed_time_ms, __int64 * ntp_time_ms)  line 214
webrtc::AudioDeviceBuffer::RequestPlayoutData(unsigned int samples_per_channel)  line 304


// From here on it is device-specific, e.g. third_party/webrtc/modules/audio_device/linux/audio_device_pulse_linux.cc /...
webrtc::AudioDeviceWindowsCore::DoRenderThread()  line 2976
webrtc::AudioDeviceWindowsCore::WSAPIRenderThread(void * context)  line 2778  // audio render thread: pulls audio data for decoding and playout
On Linux:
AudioDeviceLinuxPulse::PlayThreadProcess() {  third_party/webrtc/modules/audio_device/linux/audio_device_pulse_linux.cc


Audio mixing:
In AudioMixerImpl:
Mix: frame_combiner_.Combine(GetAudioFromSources(output_frequency),
third_party/webrtc/modules/audio_mixer/frame_combiner.cc
void FrameCombiner::Combine(rtc::ArrayView<AudioFrame* const> mix_list,

third_party/webrtc/modules/audio_mixer/audio_frame_manipulator.cc
void RemixFrame(size_t target_number_of_channels, AudioFrame* frame) {
mixer.Transform(frame);


third_party/webrtc/audio/utility/channel_mixer.cc
void ChannelMixer::Transform(AudioFrame* frame) {  // mixing / channel remixing
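
At its core, mixing just sums the PCM samples of all sources and saturates to the 16-bit range; the real FrameCombiner additionally runs a limiter and handles channel layout. A rough standalone sketch (function name invented, not WebRTC code):

#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

// Simplified mixer: sum the 16-bit PCM samples of every source frame and
// saturate to the int16_t range.
std::vector<int16_t> MixFrames(const std::vector<std::vector<int16_t>>& sources) {
  if (sources.empty()) return {};
  const size_t samples = sources.front().size();
  std::vector<int32_t> acc(samples, 0);
  for (const auto& src : sources)
    for (size_t i = 0; i < samples; ++i)
      acc[i] += src[i];  // accumulate in 32 bits to avoid overflow
  std::vector<int16_t> mixed(samples);
  for (size_t i = 0; i < samples; ++i)
    mixed[i] = static_cast<int16_t>(std::min<int32_t>(32767, std::max<int32_t>(-32768, acc[i])));  // saturate
  return mixed;
}

int main() {
  std::vector<int16_t> a = {1000, -20000, 30000};
  std::vector<int16_t> b = {2000, -20000, 10000};
  for (int16_t s : MixFrames({a, b})) std::printf("%d ", s);  // 3000 -32768 32767
  std::printf("\n");
}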

Important related concepts:

Keyframe requests: the difference between FIR and PLI:

  • PLI is Picture Loss Indication.
    A PLI message indicates that a burst of packet loss has affected multiple packets belonging to one or more frames. The sender can respond by retransmitting those packets or by generating a new I-frame. In general, though, a PLI behaves like a NACK and a FIR at the same time, so by using PLI the receiver gives the sender more flexibility in how to respond to the request.

  • SLI is Slice Loss Indication.
    An SLI message indicates that the packet loss affected part of a single frame (i.e. several macroblocks). When the sender receives an SLI it can re-encode that slice and stop the propagation of the partial-frame decoding error.

When the sender receives a PLI or SLI from the receiver, it has the encoder generate a new keyframe and sends it to the receiver.

  • FIR is Full Intra Request.
    Video in a WebRTC session always starts with an I-frame, followed by P-frames. When a new participant joins a conference mid-session, however, it will most likely receive a series of P-frames that it cannot decode because the corresponding I-frame is missing. In that case the receiver sends a FIR to request an I-frame.
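
On the wire, PLI and FIR are both RTCP payload-specific feedback packets (packet type 206, RFC 4585): PLI is FMT 1 with no FCI, while FIR is FMT 4 with one 8-byte FCI entry per requested SSRC (RFC 5104). A minimal sketch of the PLI layout (the helper is invented for illustration, not WebRTC's implementation):

#include <cstdint>
#include <cstdio>
#include <vector>

// Builds the 12-byte RTCP PLI packet defined in RFC 4585.
std::vector<uint8_t> BuildPli(uint32_t sender_ssrc, uint32_t media_ssrc) {
  std::vector<uint8_t> p;
  auto append_u32 = [&p](uint32_t v) {
    for (int shift = 24; shift >= 0; shift -= 8)
      p.push_back(static_cast<uint8_t>(v >> shift));
  };
  p.push_back(0x80 | 1);  // V=2, P=0, FMT=1 (PLI)
  p.push_back(206);       // PT = PSFB (payload-specific feedback)
  p.push_back(0);
  p.push_back(2);         // length in 32-bit words minus one: (2+1)*4 = 12 bytes
  append_u32(sender_ssrc);
  append_u32(media_ssrc);
  return p;
}

int main() {
  for (uint8_t b : BuildPli(0x11111111, 0x22222222)) std::printf("%02x ", b);
  std::printf("\n");  // prints: 81 ce 00 02 11 11 11 11 22 22 22 22
}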


Connection-related mechanisms:
  1. WebRTC: ICE and STUN
    https://rfc2cn.com/rfc8445.html
  2. WebRTC: SDP
    https://rfc2cn.com/rfc4566.html
  3. WebRTC: DTLS handshake
    http://www.rfc2cn.com/rfc6347.html
    Transport policy mechanisms are analyzed together in the chapter on specific mechanisms.
  4. WebRTC: the queues on the receive path
  5. WebRTC: the jitter buffer mechanism
  6. WebRTC: the FEC mechanism
  7. WebRTC: the NACK mechanism