News from Industry

ClueCon Weekly May 6th, 2015!

FreeSWITCH - Wed, 05/06/2015 - 21:03

Check out the weekly conference call to see the latest news!

Kamailio World 2015 – Grants for Students

miconda - Tue, 05/05/2015 - 14:17
Given the roots and the tight relation of the Kamailio project with the academic environment, we are offering three seats at the Kamailio World Conference, May 27-29, 2015, in Berlin, to students enrolled in universities or research institutes (both bachelor and PhD programs qualify). Last year we tried it locally with the universities in Berlin, and this year we want to extend it, as there might be young people interested in travelling a bit and attending the event. If you are a student and want to participate, email registration@kamailio.org. Participation in
all the content of the event (workshops, conference and social event) is free, but you will have to cover your own travel and accommodation expenses. Write a short description of your interest in real time communications and of the university or research institute you are affiliated with. Also, if you are not a student but are in touch with some, or have access to student forums/mailing lists, it would be very much appreciated if you forwarded these details. More information about Kamailio World is available on the web site. Looking forward to meeting many of you in just a few weeks in Berlin!

Simulating NAT with two Linux boxes

TXLAB - Tue, 04/28/2015 - 17:20

I needed to test some master-slave software in a situation where the master communicated with the slave over NAT (the master’s IP address was replaced with the firewall’s external address), and then the NAT would be removed, keeping the master and slave addresses the same, but with the slave seeing the master directly.

This is the test scenario that worked on my desk, without having to add any routing to the LAN.

atom02 is the computer that emulates the slave system. It is connected back-to-back to alix102, and has only one IP address to communicate with:

ip link set dev eth0 up
ip addr add 192.168.1.50/31 dev eth0

alix102 is a Linux box with multiple Ethernet ports: eth0 is connected to my home LAN and has a DHCP address 192.168.1.142/24. Also eth1 (192.168.1.51/31) is connected directly to atom02.

The following configuration makes alix102 answer to ARP requests for 192.168.1.50 and forward packets to atom02, replacing the source address with 192.168.1.51. Also atom02 can make an SSH connection to 192.168.1.51:3022 and it will be connected to another box in the LAN that emulates the software master (192.168.1.147:22).

# enable IP forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward

# Bring up eth1
ip link set dev eth1 up
ip addr add 192.168.1.51/31 dev eth1

# Enable proxy ARP on eth0
echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp

# Set up the NAT translation
iptables -t nat -A POSTROUTING -o eth1 -j SNAT --to 192.168.1.51
iptables -t nat -A PREROUTING -p tcp --dport 3022 -i eth1 -j DNAT --to 192.168.1.147:22

After that, atom02 can be reconnected directly to the LAN, keeping the address 192.168.1.50 with a /24 network mask, and the software can be tested with direct communication. Alix102 has to be disconnected from the LAN, so that it does not pollute it with proxy ARP responses.


Filed under: Networking Tagged: linux

Linux reboot freezes on Acer Aspire One

TXLAB - Tue, 04/28/2015 - 11:16

I needed to install CentOS 6 on an old Acer Aspire One notebook (with an Intel Atom CPU) for some software testing. The problem was that it could not perform a reboot, and I needed to press the power button every time. The usual instructions for the reboot=X kernel parameter did not help at all.

What really helped was the `kernel-ml` package from the elrepo.org repositories. At the time of writing, it was version `4.0.0-1.el6.elrepo.x86_64`.

Keep in mind that after installing the kernel-ml package, you need to edit /etc/grub.conf and make the new kernel the default. No additional boot options are required.


Filed under: Hardware Tagged: linux

Kamailio World 2015 – The Schedule

miconda - Mon, 04/27/2015 - 17:00
It is one month till the start of Kamailio World 2015; time has passed very fast since we announced the event, accelerated by an April filled with many public holidays. The first draft of the schedule is now available; as usual, expect many speakers to tune the content of their presentations along the way to the day of the talk, to surprise the audience with challenging concepts and visions. At this edition we were pleasantly surprised by the number of submissions, but that made selecting the sessions an extremely hard task. To accommodate as much as possible, we are introducing lightning talks, two slots of 10 minutes each, to give the opportunity to present shortly about interesting ideas or updates of applications used in the Kamailio and VoIP eco-system. The two days of conference are filled with 28 sessions, and the event is completed with 5 technical workshops during the pre-conference day. Several exhibitors will be present during the conference days with showcases of their products or solutions, ready with many demos on site. We are very grateful to our sponsors, who made it possible to bring again a consistent number of speakers, ensuring first class quality content for the entire event. In about 4 weeks, we will be ready to welcome you in the beautiful city center of Berlin. Don’t miss the opportunity to attend this event; it is unique across Europe, bringing open source and the real time communications industry together, bridging flexibility and innovation with telecommunication businesses. It is now the right time to register! See you in Berlin at the end of May!

Kamailio v4.3 – Development Frozen

miconda - Thu, 04/23/2015 - 11:10
The development (aka master) branch of Kamailio now enters the pre-release phase for version 4.3.0. No new features are allowed to be pushed to the GIT master branch until we create a dedicated branch for 4.3 (expected in about 4 weeks or so). The focus now moves to testing the code, to get it into a stable, rock solid state at the time of release. We hope to get many people from the community involved in testing. If you want to get involved and need assistance about what to do and how to do it, please don’t hesitate to write to the mailing lists. The first step is to get Kamailio installed from sources. Stay tuned for updates to the wiki pages with guidelines for migration from 4.2 to 4.3 as well as what is new in version 4.3. The release of v4.3.0 is expected to be out a few weeks after the Kamailio World Conference — more or less around mid-June 2015.

WebRTC Meetup – Vancouver

webrtc.is - Thu, 04/23/2015 - 06:12

Vancouver is one of the hotbeds for IP communication technology and is home to many developers. With the advent of WebRTC, integration of voice and video chat into almost any application is within reach, but as always there are pitfalls. Sounds like a great reason to start a WebRTC meetup in Vancouver!

As of today, Vancouver has its own WebRTC meetup group. If you are interested in linking up and talking to like-minded RTC geeks implementing real time communications using WebRTC, please join and let’s get together. We will also be looking for meetup facilities and sponsors (snacks, drinks, etc.).

I am thinking our first meetup will be in May sometime, not sure on exact dates yet.

The agenda and topics for the first meeting are wide open. Topics like “WebRTC 101” or “Dos and Don’ts” come to mind, but we can decide on that once we have heard from some active members.

We will also be bringing in some live guests from time to time via what else, WebRTC!

Hope to see you soon!

/Erik


What’s up with WhatsApp and WebRTC?

webrtchacks - Wed, 04/22/2015 - 17:14

One of our first posts was a Wireshark analysis of Amazon’s Mayday service to see if it was actually using WebRTC. In the very early days of WebRTC, verifying a major deployment like this was an important milestone for the WebRTC community. More recently, Philipp Hancke – aka Fippo – did several great posts analyzing Google Hangouts and Mozilla’s Hello service in Firefox. These analyses validate that WebRTC can be successfully deployed by major companies at scale. They also provide valuable insight for developers and architects on how to build a WebRTC service.

These posts are awesome and of course we want more.

I am happy to say many more are coming. In an effort to disseminate factual information about WebRTC, Google’s WebRTC team has asked &yet – Fippo’s employer – to write a series of publicly available, in-depth, reverse engineering and trace analysis reports. Philipp has agreed to write summary posts outlining the findings and implications for the WebRTC community here at webrtcHacks. This analysis is very time consuming. Making it consumable for a broad audience is even more intensive, so webrtcHacks is happy to help with this effort in our usual impartial, non-commercial fashion.

Please see below for Fippo’s deconstruction of WhatsApp voice calling.

{“editor”: “chad“}

 

Philipp Hancke deconstructs WhatsApp to search for WebRTC

 

After some rumors (e.g. on TechCrunch), WhatsApp recently launched voice calls for Android. This spurred some interest in the WebRTC world with the usual suspects like Tsahi Levent-Levi chiming in and starting a heated debate. Unfortunately, the comment box on Tsahi’s BlogGeek.Me blog was too narrow for my comments so I came back here to webrtchacks.

At that point, I had considered doing an analysis of some mobile services already and, thanks to support from the Google WebRTC team, I was able to spend a number of days looking at Wireshark traces from WhatsApp in a variety of scenarios.

Initially, I was merely trying to validate the capture setup (to be explained in a future blog post), but it turned out that there is quite a lot of interesting information here and even some lessons for WebRTC. So I ended up writing a full fifteen-page report which you can get here. It is a long story of packets (available for download here) which will be very boring if you are not an engineer, so let me try to summarize the key points here.

Summary

WhatsApp is using the PJSIP library to implement Voice over IP (VoIP) functionality. The captures show no signs of DTLS, which suggests the use of SDES encryption (see here for Victor’s past post on this). Even though STUN is used, the binding requests do not contain ICE-specific attributes. RTP and RTCP are multiplexed on the same port.

The audio codec cannot be fully determined. The sampling rate is 16 kHz, the codec bandwidth is about 20 kbit/s, and the bandwidth was the same when muted.

An inspection of the binary using the strings tool shows both PJSIP and several strings hinting at the use of elements from the webrtc.org voice engine such as the acoustic echo cancellation (AEC), AECM, gain control (AGC), noise suppression and the high-pass filter.

Comparison with WebRTC

| Feature | WebRTC/RTCWeb Specifications | WhatsApp |
| SDES | MUST NOT offer SDES | probably uses SDES |
| ICE | RFC 5245 | no ICE, STUN connectivity checks |
| TURN usage | used as last resort | uses a similar mechanism first |
| Audio codec | Opus or G.711 | unclear, 16 kHz with 20 kbps bitrate |

Switching from a relayed session to a p2p session

The most impressive thing I found is the optimization for a fast call setup by using a relay initially and then switching to a peer-to-peer session. This also opens up the possibility for a future multi-party VoIP call which would certainly be supported by this architecture. The relay server is called “conf bridge” in the binary.

Let’s look at the first session to illustrate this (see the PDF for the full, lengthy description):

  1. The session kicks off (in packet #70) by sending TURN ALLOCATE requests to eight different servers. This request doesn’t use any standard STUN attributes which is easy to miss.
  2. After getting a response the client is exchanging some signaling traffic with the signaling server, so this is basically gathering a relayed candidate and sending an offer to the peer.
  3. Packet #132 shows the client sending something to one of those TURN servers. This turns out to be an RTCP packet, followed by some RTP packets, which can be seen by using Wireshark’s “decode as” functionality. This is somewhat unusual and misleading, as it is not using standard TURN functionality like send or data indications. Instead, it just sends raw RTP over that allocation.
  4. Packet #146 shows the first RTP packet from the peer. For about three seconds, the RTP traffic is relayed over this server.
  5. In the meantime, packet #294 shows the client sending a STUN binding request to the peer’s public IP address. Using a filter (ip.addr eq 172.16.42.124 and ip.addr eq 83.209.197.82) and (udp.port eq 45395 and udp.port eq 35574) clearly shows this traffic.
  6. The first response is received in packet #300.
  7. Now something really interesting happens. The client switches the destination of the RTP stream between packets #298 and #305. By decoding those as RTP we can see that the RTP sequence number increases just by one. See this screenshot:

Now, if we have decoded everything as RTP (which is something Wireshark doesn’t get right by default, so it needs a little help), we can change the filter to rtp.ssrc == 0x0088a82d and see this clearly. The intent here is to first try a connection that is almost guaranteed to work (I recently used a similar rationale in the minimal viable SDP post) and then switch to a peer-to-peer connection in order to minimize the load on the TURN servers.

Wow, that is pretty slick. It likely reduces the call setup time the user perceives. Let me repeat that: this is a hack which makes the user experience better!

By how much is hard to quantify. Only a large-scale measurement of both this approach and the standard approach can answer that.

Lessons for WebRTC

In WebRTC, we can do something similar, but it is a little more effort right now. We can set up the call with iceTransports: ‘relay’, which will skip host and server-reflexive candidates. Also, using a relay helps to guarantee the connection will work (in conditions where WebRTC will work at all).
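As a rough sketch of that configuration (the TURN server URL and credentials below are placeholders, and the option was spelled iceTransports in 2015-era Chrome but later renamed iceTransportPolicy, so check your target browser):

// Sketch only: restrict ICE gathering to relay candidates at setup time.
// turn.example.org and the credentials are placeholders, not real values.
var pc = new webkitRTCPeerConnection({
  iceServers: [{
    urls: 'turn:turn.example.org',
    username: 'user',
    credential: 'secret'
  }],
  iceTransports: 'relay' // newer specs call this iceTransportPolicy
});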

There are some drawbacks to this approach in terms of round-trip-times due to TURN’s permission mechanism. Basically when creating a TURN-relayed candidate the following happens (in Chrome; Firefox’s behavior differs slightly):

  1. Chrome tries to create an allocation without authentication
  2. the TURN server asks for authentication
  3. Chrome retries to create an allocation with authentication
  4. the TURN server tells Chrome the address and port of the candidate.
  5. Chrome signals the candidate to the JavaScript layer via the onicecandidate callback. That is two full round-trip times.
  6. after adding a remote candidate, Chrome will create a TURN permission on the server before the server will relay traffic from the peer. This is a security mechanism described here.
  7. now STUN binding requests can happen over the relayed address. This uses TURN send and data indications. These add the peer’s address and port to each packet received.
  8. when agreeing on a candidate, Chrome creates a TURN channel for the peer’s address which is more efficient in terms of overhead.

Compared to this, the proprietary mechanism used by WhatsApp saves a number of round trips.

this is a hack which makes the user experience better!

If we started with just relay candidates, then, since this hides the IP addresses of the parties involved from each other, we might even establish the relayed connection and do the DTLS handshake before the callee accepts the call. This is known as transport warmup; it reduces the perceived time until media starts flowing.

Once the relayed connection is established, we can call setConfiguration (formerly known as updateIce; which is currently not implemented) to remove the restriction to relay candidates and do an ICE restart by calling createOffer again with the iceRestart flag set to true. This would trigger an ICE restart which might determine that a P2P connection can be established.

Despite updateIce not being implemented, we can still switch from a relay to peer-to-peer today. ICE restarts work in Chrome, so the only bit we’re missing is the iceTransports ‘relay’ option which generates just relay candidates. The same effect can be simulated in JavaScript by dropping any non-relay candidates during the first iteration, as sketched below. It was pretty easy to implement this behaviour in my favorite SDP munging sample. The switch from relayed to P2P just works. The code is committed here.
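A minimal sketch of that JavaScript workaround (sendToPeer is a placeholder for whatever signaling channel you use, and the legacy callback form of createOffer with RTCOfferOptions is assumed):

// Sketch: drop non-relay candidates during the first negotiation,
// then allow everything again and trigger an ICE restart.
var relayOnly = true;
pc.onicecandidate = function (event) {
  if (!event.candidate) return;
  if (relayOnly && event.candidate.candidate.indexOf(' typ relay ') === -1) {
    return; // discard host and server-reflexive candidates for now
  }
  sendToPeer({ candidate: event.candidate }); // placeholder signaling call
};

function switchToP2P() { // call once the relayed connection is up
  relayOnly = false;
  pc.createOffer(function (offer) {
    pc.setLocalDescription(offer);
    sendToPeer({ sdp: offer.sdp });
  }, function (err) {
    console.error(err);
  }, { iceRestart: true });
}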

While ICE restart is inefficient currently, the actual media switch (which is hard) happens very seamlessly.

 

In my humble opinion

WhatsApp’s usage of STUN and RTP seems a little out of date. Arguably, the way STUN is used is very straightforward and makes things like implementing the switch from relayed calls to P2P mode easier. But ICE provides methods to accomplish the same thing in a more robust way. Using custom TURN-like functionality that delivers raw RTP from the conference bridge saves some bytes of overhead compared to TURN channels, but that overhead is typically negligible.

Not using DTLS-SRTP with ciphers capable of perfect forward secrecy is a pretty big issue in terms of privacy. SDES is known to have drawbacks and can be decrypted retroactively if the key (which is transmitted via the signaling server) is known. Note that the signaling exchange might still be protected the same way it is done for text messages.

In terms of user experience, the handling of mid-call blocking of P2P traffic showed that this scenario had been considered, which shows quite some thought. Echo cancellation is a serious problem though. The webrtc.org echo cancellation is capable of doing a much better job and seems to be included in the binary already. Maybe the team there would even offer their help in exchange for an acknowledgement… or awesome chocolate.

 

{“author”: “Philipp Hancke“}

Want to keep up on our latest posts? Please click here to subscribe to our mailing list if you have not already. We only email post updates. You can also follow us on Twitter at @webrtcHacks for blog updates and news of technical WebRTC topics, or our individual feeds @chadwallacehart, @reidstidolph, @victorpascual and @tsahil.

The post What’s up with WhatsApp and WebRTC? appeared first on webrtcHacks.

The WebRTC Troubleshooter: test.webrtc.org

webrtchacks - Mon, 04/20/2015 - 09:00

WebRTC-based services are seeing new and larger deployments every week. One of the challenges I’m personally facing is troubleshooting as many different problems might occur (network, device, components…) and it’s not always easy to get useful diagnostic data from users.

troubleshooting (Image source: google)

Earlier this week, Tsahi, Chad and I participated in the WebRTC Global Summit in London and had the chance to catch up with some friends from Google, who publicly announced the launch of test.webrtc.org. This is a great diagnostic tool but, to me, the best thing is that it can be easily integrated into your own applications; in fact, we are already integrating this into some of our WebRTC apps.

Sam, André and Christoffer from Google are providing here a brief description of the tool. Enjoy it and happy troubleshooting!

{“intro-by”: “victor“}

The WebRTC Troubleshooter: test.webrtc.org (by Google)

Why did we decide to build this?

We have spent countless hours debugging things when a bug report comes in for a real-time application. Besides the application itself, there are many other components (audio, video, network) that can and will eventually go wrong due to the huge diversity among users’ system configurations.

By running small tests targeted at each component, we hoped to identify issues and make it possible to gather information on the system, reducing the need for round trips between developers and users to resolve bug reports.

Test with audio problem


What did we build?

It was important to be able to run this diagnostic tool without installing any software, and ideally one should be able to integrate it very closely with an application, thus making it possible to clearly separate bugs in an application from bugs in the components that power it.

To accomplish this, we created a collection of tests that verify basic real-time functionality from within a web page: video capture, audio capture, connectivity, network limitations, stats on encode time, supported resolutions, etc… See details here. 

We then bundled the tests on a web page that enables the user to download a report, or make it available via a URL that can be shared with developers looking into the issue.

How can you use it?

Take a look at test.webrtc.org and find out what tests you could incorporate in your app to help detect or diagnose user issues. For example, simple tests to distinguish application failures from system component failures, or more complex tests such as detecting whether the camera is delivering frozen frames, or telling the user that their network signal quality is weak.
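As a trivial, hypothetical illustration of the kind of check such a support flow could start with (this is a sketch, not code from test.webrtc.org; the prefixed callback-style getUserMedia of that era is assumed):

// Hypothetical sketch: verify that audio and video capture work at all
// before running deeper diagnostics.
function basicCaptureCheck(done) {
  var gum = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia;
  gum.call(navigator, { audio: true, video: true }, function (stream) {
    var ok = stream.getAudioTracks().length > 0 && stream.getVideoTracks().length > 0;
    stream.getTracks().forEach(function (t) { t.stop(); }); // release the devices
    done(ok ? null : new Error('no audio/video tracks delivered'));
  }, function (err) {
    done(err); // missing device, denied permission, etc.
  });
}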

https://webrtchacks.com/wp-content/uploads/2015/04/test.webrtc.org_.mp4

We encourage you to take ideas and code from GitHub and integrate similar functionality into your own UX. Using test.webrtc.org should be part of any “support request” flow for real-time applications. We encourage developers to contribute!

In particular we’d love some help getting a uniform getStats API between browsers.

test.webrtc.org repo

What’s next?

We are working on adding more tests (e.g., network analysis that detects issues affecting audio and video performance is on the way).

We want to learn how developers integrate our tests into their apps and we want to make them easier to use!

{“authors”: [“Sam“, “André“, “Christoffer”]}

Want to keep up on our latest posts? Please click here to subscribe to our mailing list if you have not already. We only email post updates. You can also follow us on Twitter at @webrtcHacks for blog updates and news of technical WebRTC topics, or our individual feeds @chadwallacehart, @reidstidolph, @victorpascual and @tsahil.

The post The WebRTC Troubleshooter: test.webrtc.org appeared first on webrtcHacks.

3CX is a sponsor at Microsoft Ignite 2015!

Libera il VoIP - Thu, 04/16/2015 - 16:12

3CX is a Silver Sponsor at Microsoft Ignite 2015, which will take place in Chicago from May 4th to 8th.

The main focus of this year’s Microsoft Ignite is Cloud technology, Unified Communications and Mobility: in short, it is tailor-made for 3CX! Industry insiders, experts and opinion leaders will attend the event, so register and take part.

Live demonstrations of 3CX Phone System and of our integrated web conferencing solution, 3CX WebMeeting, based on WebRTC technology, will be given on every day of the conference.

Come and meet the 3CX USA team and 3CX CEO Nick Galea at booth #307.

To avoid delays or overlaps, please schedule an appointment via e-mail.

We look forward to meeting you in person at Microsoft Ignite 2015!

Related posts
  • Microsoft enters the field too

    After AOL, Google, Yahoo and others, the Redmond giant is also entering the voice-over-IP market, and it is doing so by developing, in collaboration with major hardware manufacturers, a solution designed for [...]

  • Response Point: Microsoft abandons VoIP

    Response Point was supposed to be the thoroughbred through which Microsoft would extend its “leadership” into the voice-over-IP sector as well. Apparently, though, Microsoft’s venture can already be [...]

  • TellMe by Microsoft: a voice search engine for BlackBerry

    With an “unexpected twist” (at least for me), TellMe, a company recently acquired by Microsoft, has launched a new application for the RIM platform that allows searches to be performed through voice commands.

    How it works [...]

Kamailio World 2015 – The Workshops

miconda - Wed, 04/15/2015 - 23:57
It is now about a month and a half till the start of the Kamailio World Conference 2015. Continuing with the same event structure as in 2014, the afternoon of the first day, the 27th of May, is filled with several technical workshops. These sessions are intended to give a more hands-on perspective on the subjects, with deeper technical content. Last year, Sipwise showed how to deploy sip:provider CE – the open source out-of-the-box IP Telephony Operator Platform – in a matter of minutes and customize it to better fit your needs. This year, Daniel Grotti, a long time SIP and Kamailio fellow, is going to show how to enable WebRTC for sip:provider CE in order to bridge the communication between the web world and classic SIP phones. A few other typical use cases will be approached during the session. Carsten Bock, from NG Voice, is returning with another tutorial to show more of what can be done with Kamailio for IMS and VoLTE deployments. Besides the tutorial, the plan is to have a VoLTE testbed on site for the duration of the entire event, so the participants can test with their own devices. After presenting at past editions the concept and the development of CGRateS, a carrier grade open source CDR rating engine, Dan Bogos is now coming with a hands-on session about how to integrate it with Kamailio for prepaid and postpaid billing. The ability to troubleshoot SIP routing and analyze the flows on the wire is one of the core skills required of VoIP engineers. Lorenzo Mangani, one of the co-founders of the Homer SIP Capture project, is going to deliver a session on how to use existing open source tools (including, but not limited to, Homer and sipgrep) to make the SIP troubleshooting process easier. All together they provide an amazing amount of knowledge from people with first hand experience, those that built the systems. It is a unique opportunity at Kamailio World to interact face to face with such people. The conference days are filled with other very interesting sessions, including valuable technical details, presenting scalable and secure architectures or other products that can be used to complete VoIP platforms with new features. Right now you can see details for a selection of presentations on the Schedule page. Be sure you don’t miss the Kamailio World Conference 2015, May 27-29, in Berlin, Germany – it is the open source real time communications event in Europe! Secure your participation and register now! See you in Berlin!

Put in a Bug in Apple’s Apple – Alex Gouaillard’s Plan

webrtchacks - Tue, 04/14/2015 - 13:08

Apple Feast photo courtesy of flikr user Overduebook. Licensed under Creative Commons NC2.0.

One of the biggest complaints about WebRTC is the lack of support for it inside Safari and iOS’s webview. Sure, you can use an SDK or build your own native iOS app, but that is a lot of work compared to Android, which has Chrome and WebRTC inside the native webview on Android 5 (Lollipop) today. Apple, being Apple, provides no external indication of what it plans to do with WebRTC. It is unlikely they will completely ignore a W3C standard, but who knows if iOS support is coming tomorrow or in 2 years.

Former guest webrtcHacks interviewee Alex Gouaillard came to me with an idea a few months ago for helping to push Apple and get some visibility. The idea is simple – leverage Apple’s bug process to publicly demonstrate the desire for WebRTC support today, and hopefully get some kind of response from them. See below for details on Alex’s suggestion and some additional Q&A at the end.

Note: Alex is also involved in the webrtcinwebkit project – that is a separate project that is not directly related, although it shares the same goal of pushing Apple. Stay tuned for some coverage on that topic.

{“intro-by”: “chad“}

Plan to Get Apple to support WebRTC

The situation

According to some polls, adding WebRTC support to Safari, especially on iOS and in native apps in iOS, is the most wanted WebRTC item today.

The technical side of the problem is simple: any native app has to follow Apple’s store rules to be accepted in the store. These rules state that any apps that “browse the web” need to use the Apple-provided WebView [rule 2.17], which is based on the WebKit framework. Safari is also based on WebKit. WebKit does not support WebRTC… yet!

First Technical step

The webrtcinwebkit.org project aims at addressing the technical problem within the first half of 2015. However, bringing WebRTC support to WebKit is just part of the overall problem. Only Apple can decide to use it in their products, and they are not commenting about products that have not been released.

There have been lots of signs though that Apple is not opposed to WebRTC in WebKit/Safari.

  • Before the Chrome fork of WebKit/WebCore into what became known as Blink, Apple was publicly working on parts of the WebRTC implementation (source)
  • Two umbrella bugs to accept an implementation of WebRTC in WebKit are still open and active in WebKit’s bugzilla, with an Apple media engineer in charge (Bug 124288 & Bug 121101)
  • Apple engineers, not the usual Apple standards representative, joined the W3C WebRTC working group in early 2014 (public list), and participated in the technical plenary meeting in November 2014 (W3C members restricted link)
  • Finally, an early implementation of Media Streams and the GetUserMedia API in WebKit was contributed in late 2014 (original bug & commit).

So how do you let Apple know you want it, and soon – potentially this year?

Let Apple know!

Chrome and Internet Explorer (IE), for example, have set up pages for web developers to directly give feedback about which features they want to see next (WebRTC-related items generally rank high, by the way). There is no such thing yet for Apple’s products.

The only way to formally provide feedback to Apple is through the bug process. One needs to have or create a developer account, and open a bug to let Apple know they want something. Free accounts are available, so there is no financial cost associated with the process. One can open a bug in any given category; the bugs are then triaged and will end up in the “WebRTC” placeholder internally.

Volume counts. The more people ask for this feature, the more likely Apple is to support it. The more requests the better.

But that is not the only thing that counts. Users of WebRTC libraries, or any third party who has a business depending on WebRTC can also raise their case with Apple that their business would profit from Apple supporting WebRTC in their product. Here too, volume (of business) counts.

As new releases of Safari are usually made with new releases of the OS, and generally in or around September, it is very unlikely to see WebRTC in Safari (if ever) before the next release, late 2015.

We need you

You want WebRTC support on iOS? You can help. See below for a step-by-step guide on how.

How to Guide Step-by-step guide
  1. Register a free Apple Developer account. Whether or not you are a developer does not really matter. You will need to create an Apple ID if you do not have one already.
  2. Sign in to the Bug Reporter:
  3. Once signed in, you should see the following screen:
  4. Click on Open, then select Safari:
  5. Go ahead and write the bug report:

It is very important here that you write WHY, in your own words, you want WebRTC support in Safari. There are multiple different reasons you might want it:

  • You’re a developer: you have developed a website that requires WebRTC support, and you cannot use it on Safari. If your users are requesting it, please share the volume of requests, and/or share the volume of usage you’re getting on non-Safari browsers to show the importance of this for Apple.
  • You’re a company with a WebRTC product or service. You have the same problem as above, and the same suggestions apply.
  • You’re a user of a website that requires WebRTC, and the owner of many Apple devices. You would love to be able to use your favorite WebRTC product or service on your beloved device.
  • You’re a company that offers a plugin for WebRTC in Safari, and you would love to get rid of it.
  • Others.

Oftentimes, communities organize “bug writing campaigns” that include boilerplate text to include in a bug. It’s a natural tendency for reviewers to discount those bugs somewhat because they feel like more of a “me too” than a bug filed by someone that took 60 seconds to write up a report in their own words.

{“author”, “Alex Gouaillard“}

{“editor”, “chad“}

Chad’s follow-up Q&A with Alex

Chad: What is Apple’s typical response to these bug filing campaigns?

Alex: I do not have a direct answer to this, and I guess only Apple does. However, here are two very clear comments by an Apple representative:

The only way to let Apple know that a feature is needed is through bug filing.

“I would just encourage people to describe why WebRTC (or any feature) is important to them in their own words. People sometimes start “bug writing campaigns” that include boilerplate text to include in a bug, and I think people here have a natural tendency to discount those bugs somewhat because they feel like more of a “me too” than a bug filed by someone that took 60 seconds to write up a report in their own words.”

So my initiative here is not to start a bug campaign per se, where everybody would copy-paste the same text or click the same report to increment a counter. My goal here is to let the community know they can let Apple know their opinion in a way that counts.

[Editor’s note: I was not able to get a direct confirmation from Apple (big surprise) – I did directly confirm evidence that at least one relevant Apple employee agrees with the sentiment above.]

Chad: Do you have any examples of where this process has worked in the past to add a whole new W3C-defined capability like WebRTC?

Alex: I do not. However, comment #1 above by the Apple representative was very clear that, whether it will eventually work or not, there is no other way.

Chad: Is there any kind of threshold on the number of bug filings you think the community needs to meet?

Alex: My understanding is that it’s not so much about the number of people that send bugs; it’s more about the case they make. It’s a blend of business opportunities and number of people. I guess volume counts – whether it is people or dollars. This is why it is so important that people use their own words and describe their own case.

Let’s say my friends at various other WebRTC Platform-as-a-Service providers want to show the importance for them of having WebRTC in iOS or Safari – one representative of the company could go in and explain their use case and their numbers for the platform/service. They could also ask their devs to file a bug describing the application they developed on top of their WebRTC platform. They could also ask their users to describe why, as users of the WebRTC app, they feel disadvantaged compared to their friends who own a Samsung tablet and can enjoy WebRTC while they cannot on their iPad. (That is just an example, and I do not suggest that they should write exactly this. Again, everybody should use their own words.)

If I understand correctly, it does not matter whether one or several employees of the above-named company file one or several bugs for the same company use case.

Chad: Are you confident this will be a good use of the WebRTC developer’s community’s time?

Alex: Ha ha. Well, let’s put it this way: the whole process takes around a couple of minutes in general, and maybe just a little bit more for companies that have a bigger use case and want to weigh in. Less than what you are spending reading this blog post. If you don’t have a couple of minutes to file a bug with Apple, then I guess you don’t really need the feature.

More seriously, I have been contacted by enough people that just wanted to have a way, any way, to make it happen, that I know this information will be useful. For the cynics out there, I’m tempted to say: worst case scenario, you lost a couple of minutes to prove me wrong. Don’t miss the opportunity.

Yes, I’m positive this will be a good use of everybody’s time.

{“interviewer”, “chad“}

{“interviewee”, “Alex Gouaillard“}

Want to keep up on our latest posts? Please click here to subscribe to our mailing list if you have not already. We only email post updates. You can also follow us on Twitter at @webrtcHacks for blog updates and news of technical WebRTC topics, or our individual feeds @chadwallacehart, @reidstidolph, @victorpascual and @tsahil.

The post Put in a Bug in Apple’s Apple – Alex Gouaillard’s Plan appeared first on webrtcHacks.

Testing FreeSWITCH performance on Scaleway C1

TXLAB - Sat, 04/11/2015 - 02:23

The dedicated ARM hosting servers at Scaleway appear to be a decent platform for a mid-sized PBX.

In short, the platform displays the following results in performance tests:

  • OPUS<->PCMA transcoding: 16 simultaneous calls at about 95% total CPU load and no noticeable distortions.
  • SILK<->PCMA transcoding: 72 simultaneous calls without distortions, with average total CPU load at 63%. A higher number of calls resulted in noticeable distortions.
  • G722<->PCMA transcoding: 96 simultaneous calls without distortions, at 76% CPU load, and noticeable distortions for higher numbers.

Test 1: sequential transcoding

The following tests are a slight modification of my previous test scenario: it appears that a channel using the OPUS codec cannot execute the `echo` or `delay_echo` FreeSWITCH applications, as they copy RTP frames, and the OPUS codec is stateful and does not accept such copying. So, an extra bridge is made to ensure that echo is always executed on a PCMA channel.

XML dialplan in public context (here IPADDR is the public address on the Scaleway host):

  <!-- Extension 100 accepts the initial call, plays echo,
       and on pressing *1 it transfers to 101 -->
  <extension name="100">
    <condition field="destination_number" expression="^100$">
      <action application="answer"/>
      <action application="bind_meta_app" data="1 a si transfer::101 XML ${context}"/>
      <action application="delay_echo" data="1000"/>
    </condition>
  </extension>

  <!-- Extension 101 plays a beep, then makes an outgoing SIP call to
       our own external profile and extension 200 -->
  <extension name="101">
    <condition field="destination_number" expression="^101$">
      <action application="playback" data="tone_stream://%(100,100,1400,2060,2450,2600)"/>
      <action application="unbind_meta_app" data=""/>
      <action application="bridge"
              data="{absolute_codec_string=PCMA}sofia/external/200@IPADDR:5080"/>
    </condition>
  </extension>

  <!-- Extension 200 enforces transcoding and sends the call to 201 -->
  <extension name="200">
    <condition field="destination_number" expression="^200$">
      <action application="answer"/>
      <action application="bridge"
              data="{max_forwards=65}{absolute_codec_string=OPUS}sofia/external/201@IPADDR:5080"/>
    </condition>
  </extension>

  <!-- Extension 201 returns the call to 100, guaranteeing it to be in PCMA -->
  <extension name="201">
    <condition field="destination_number" expression="^201$">
      <action application="answer"/>
      <action application="bridge"
              data="{max_forwards=65}{absolute_codec_string=PCMA}sofia/external/100@IPADDR:5080"/>
    </condition>
  </extension>

The initial call is sent to extension 100 in the public context, and then by pressing *1, 6 additional channels are created, of which two calls perform the transcoding from PCMA to OPUS and back. So, if “show channels” shows 43 total channels, it corresponds to 42 = 6*7 test channels plus the incoming one, or 14 transcoding calls.

#### Good quality ####
# fs_cli -x 'show channels' | grep total
43 total.

# mpstat -P ALL 10
Linux 3.19.3-192 (scw01)    04/10/2015      _armv7l_        (4 CPU)

10:08:41 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
10:08:51 PM  all   82.67    0.00    2.75    0.00    0.00    1.30    0.00    0.00   13.28
10:08:51 PM    0   92.80    0.00    1.30    0.00    0.00    5.20    0.00    0.00    0.70
10:08:51 PM    1   95.30    0.00    1.60    0.00    0.00    0.00    0.00    0.00    3.10
10:08:51 PM    2   89.90    0.00    2.50    0.00    0.00    0.00    0.00    0.00    7.60
10:08:51 PM    3   52.70    0.00    5.60    0.00    0.00    0.00    0.00    0.00   41.70

10:08:51 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
10:09:01 PM  all   84.88    0.00    2.43    0.00    0.00    1.23    0.00    0.00   11.47
10:09:01 PM    0   94.50    0.00    0.50    0.00    0.00    4.90    0.00    0.00    0.10
10:09:01 PM    1   97.60    0.00    1.50    0.00    0.00    0.00    0.00    0.00    0.90
10:09:01 PM    2   87.70    0.00    2.20    0.00    0.00    0.00    0.00    0.00   10.10
10:09:01 PM    3   59.70    0.00    5.50    0.00    0.00    0.00    0.00    0.00   34.80

#### quite OK quality, with some minor distortions ####
# fs_cli -x 'show channels' | grep total
49 total.

# mpstat -P ALL 10
Linux 3.19.3-192 (scw01)    04/10/2015      _armv7l_        (4 CPU)

10:10:29 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
10:10:39 PM  all   95.65    0.00    2.40    0.00    0.00    0.83    0.00    0.00    1.12
10:10:39 PM    0   95.30    0.00    1.20    0.00    0.00    3.30    0.00    0.00    0.20
10:10:39 PM    1   96.90    0.00    2.20    0.00    0.00    0.00    0.00    0.00    0.90
10:10:39 PM    2   95.80    0.00    3.50    0.00    0.00    0.00    0.00    0.00    0.70
10:10:39 PM    3   94.60    0.00    2.70    0.00    0.00    0.00    0.00    0.00    2.70

10:10:39 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
10:10:49 PM  all   91.55    0.00    1.55    0.00    0.00    0.78    0.00    0.00    6.12
10:10:49 PM    0   89.90    0.00    1.20    0.00    0.00    3.10    0.00    0.00    5.80
10:10:49 PM    1   96.60    0.00    0.70    0.00    0.00    0.00    0.00    0.00    2.70
10:10:49 PM    2   90.60    0.00    1.70    0.00    0.00    0.00    0.00    0.00    7.70
10:10:49 PM    3   89.10    0.00    2.60    0.00    0.00    0.00    0.00    0.00    8.30

#### bad quality, barely audible ####
# fs_cli -x 'show channels' | grep total
55 total.

If the OPUS codec is replaced with SILK in the above configuration, the test is not usable, as SILK appears not to tolerate multiple transcodings, and after 4 transcodings almost no sound is propagated at all. Also, further transcoding sessions treat the input as silence and do not load the CPU.

If G722 is used, 36 transcoded calls still leave plenty of CPU resources for other tasks:

# fs_cli -x 'show channels' | grep total
109 total.

# mpstat -P ALL 10
Linux 3.19.3-192 (scw01)    04/10/2015      _armv7l_        (4 CPU)

10:37:31 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
10:37:41 PM  all   19.75    0.00    5.40    0.00    0.00    0.00    0.00    0.00   74.85
10:37:41 PM    0   27.00    0.00   12.10    0.00    0.00    0.00    0.00    0.00   60.90
10:37:41 PM    1    4.30    0.00    9.50    0.00    0.00    0.00    0.00    0.00   86.20
10:37:41 PM    2   47.60    0.00    0.00    0.00    0.00    0.00    0.00    0.00   52.40
10:37:41 PM    3    0.10    0.00    0.00    0.00    0.00    0.00    0.00    0.00   99.90

10:37:41 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
10:37:51 PM  all   17.57    0.00    7.42    0.00    0.00    0.00    0.00    0.00   75.00
10:37:51 PM    0    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
10:37:51 PM    1   20.30    0.00   29.70    0.00    0.00    0.00    0.00    0.00   50.00
10:37:51 PM    2   50.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00   50.00
10:37:51 PM    3    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00

Test 2: parallel transcoding

The following piece of the public dialplan takes the call at extension 300, makes a call in OPUS to extension 301, and then the call is bridged to 302 in PCMA, where a speech test file is played endlessly. Thus, a call to 300 produces 5 channels, which are the equivalent of two transcoded calls.

  <extension name="300">
    <condition field="destination_number" expression="^300$">
      <action application="answer"/>
      <action application="bridge"
              data="{absolute_codec_string=OPUS}sofia/external/301@IPADDR:5080"/>
    </condition>
  </extension>

  <extension name="301">
    <condition field="destination_number" expression="^301$">
      <action application="answer"/>
      <action application="bridge"
              data="{absolute_codec_string=PCMA}sofia/external/302@IPADDR:5080"/>
    </condition>
  </extension>

  <extension name="302">
    <condition field="destination_number" expression="^302$">
      <action application="answer"/>
      <action application="endless_playback" data="/var/tmp/t02.wav"/>
    </condition>
  </extension>

In parallel to a call to 300 from outside, additional endless calls were produced from fs_cli:

originate sofia/external/300@IPADDR:5080 &endless_playback(/var/tmp/t02.wav)

This originate command produced 6 new channels, equivalent to two transcoded calls. The command was repeated until the human caller heard distortions.

OPUS transcoding was functioning fine with 16 transcoded calls and 95% average CPU load, while SILK and G722 started showing distortions at around 65-75% of CPU load.

 

 


Filed under: Networking Tagged: arm, freeswitch, pbx, scaleway, voip

From SQL Tables to Kamailio Hash Tables

miconda - Thu, 04/09/2015 - 23:55
Eloy Coto Pereiro has recently published another blog post that can be useful in case one needs to cache the content of custom database tables in Kamailio’s memory via the htable module. The article uses PostgreSQL as the database server, but the same mechanism can be used for other database servers. Caching is a good way to improve performance, and htable is a very flexible mechanism in the Kamailio configuration file, with plenty of options to tune the caching rules. Enjoy!

Installing FreeSWITCH on Scaleway C1

TXLAB - Wed, 04/08/2015 - 13:13

Scaleway (a cloud service by online.net) offers ARM-based dedicated servers for EUR 9.99/month, with the first month free. The platform is powerful enough to run a small or mid-sized FreeSWITCH server, and it shows nice results in voice quality tests.

These instructions are for Debian Wheezy distribution.

By default, the server is created with Linux kernel 3.2.34, and this kernel version does not have a high-resolution timer. You need to choose 3.19.3 in server settings.

At Scaleway, you get a dedicated public IP address and 1:1 NAT to a private IP address on your server. So, the FreeSWITCH SIP profiles need to be updated (“ext-rtp-ip” and “ext-sip-ip” should point to your public IP address).

FreeSWITCH compiles and links the “mpg123-1.13.2” library, which fails to compile on the ARM architecture. You need to edit the corresponding files to point to “mpg123-1.19.0” and commit the change back to Git, because the build scripts check whether any modified and uncommitted files exist in the source tree. The patch also forces the use of gcc-4.7, as 4.6 is known to have problems on the ARM architecture.

apt-get update && apt-get install -y make curl git sox flac
mkdir -p /usr/src/freeswitch
cd /usr/src/freeswitch/
git clone https://gist.github.com/b27f4e41cc02f49d31a0.git
git clone -b v1.4 https://stash.freeswitch.org/scm/fs/freeswitch.git /usr/src/freeswitch/src
cd src
git apply ../b27f4e41cc02f49d31a0/freeswitch-arm.patch
git add --all
git commit -m 'mpg123-1.19.0.patch'
./debian/util.sh build-all -i -z1 -aarmhf -cwheezy
# This will run for about 4 hours, and you can build the sound packages
# in parallel in another terminal.

mkdir /usr/src/freeswitch-sounds
cd /usr/src/freeswitch-sounds
git clone https://github.com/traviscross/freeswitch-sounds.git music-default
cd music-default
./debian/bootstrap.sh -p freeswitch-music-default
./debian/rules get-orig-source
tar -xv --strip-components=1 -f *_*.orig.tar.xz && mv *_*.orig.tar.xz ../
dpkg-buildpackage -uc -us -Zxz -z1

cd /usr/src/freeswitch-sounds
git clone https://github.com/traviscross/freeswitch-sounds.git sounds-en-us-callie
cd sounds-en-us-callie
./debian/bootstrap.sh -p freeswitch-sounds-en-us-callie
./debian/rules get-orig-source
tar -xv --strip-components=1 -f *_*.orig.tar.xz && mv *_*.orig.tar.xz ../
dpkg-buildpackage -uc -us -Zxz -z1

cd /usr/src/freeswitch-sounds
dpkg -i *.deb

cd /usr/src/freeswitch
# this will fail because dependencies are not installed
dpkg -i freeswitch-all_*
# this will add dependencies
apt-get -f install
# finally, install FreeSWITCH
dpkg -i freeswitch-all_*

# Minimal configuration that you can use
cd /etc
git clone https://github.com/voxserv/freeswitch_conf_minimal.git freeswitch
# edit sip_profiles/*.xml and put the public IP address into "ext-rtp-ip" and "ext-sip-ip"
insserv freeswitch
service freeswitch start
Filed under: Networking Tagged: arm, freeswitch, pbx, scaleway, voip

Kamailio v4.2.4 Released

miconda - Thu, 04/02/2015 - 17:21
Kamailio SIP Server v4.2.4 stable is out – a minor release including fixes in code and documentation since v4.2.3 – configuration file and database compatibility is preserved. Kamailio (formerly OpenSER) v4.2.4 is based on the latest version of the GIT branch 4.2, therefore those running previous 4.2.x versions are advised to upgrade. No changes have to be made to the configuration file or database structure compared with older v4.2.x releases.

Resources for Kamailio version 4.2.4: source tarballs, the detailed changelog, binaries and packages, and the modules’ documentation are available at the usual locations.

Download via GIT:

# git clone git://git.kamailio.org/kamailio kamailio
# cd kamailio
# git checkout -b 4.2 origin/4.2

What is new in the 4.2.x release series is summarized in the announcement of v4.2.0. Looking forward to meeting many of you at Kamailio World 2015!

3CX wins the “Most Innovative Product” award with 3CX WebMeeting

Libera il VoIP - Tue, 03/31/2015 - 18:11

MUNICH, GERMANY, MARCH 27, 2015 – 3CX, developer of the next-generation software PBX 3CX Phone System, beat out the competition in the “Unified Communications” category of the “Most Innovative Product” award with its new 3CX WebMeeting product. This took place at CeBIT 2015 in Hannover, one of the world’s most important IT trade fairs. The award was collected by CEO Nick Galea and by Markus Kogel, EMEA Sales Manager.

3CX WebMeeting was chosen for its innovative use of WebRTC technology. WebRTC is Google’s new open-standard platform that lets users launch web meetings directly from the browser, without having to download and install any client. 3CX launched the hosted version of 3CX WebMeeting in August 2014 and the on-premise version in February 2015. Since its launch, 3CX WebMeeting has received positive feedback from both partners and end users. 3CX WebMeeting is free for up to 10 concurrent users with all 3CX Phone System v12.5 licenses.

The Innovationpreis-IT 2015 Awards are organized by Initiative Mittelstand, an online information portal that provides companies with updates on the most innovative products and technologies available.

Nick Galea, 3CX CEO, said:

“This award recognizes 3CX as a company at the forefront of the telephony and Unified Communications industry. We are the first vendor to offer a multi-point video conferencing solution based on WebRTC technology that is also integrated with our PBX at no additional cost. The ‘Most Innovative Product’ award, selected by a jury of experts, is a very prestigious recognition in Germany and we are delighted that our ability to innovate is being recognized within the IT industry.”

About 3CX (www.3cx.it)

3CX is the developer of 3CX Phone System, an open-standard unified communications platform for Windows that works with standard SIP phones and replaces any proprietary PBX. 3CX Phone System is easier to manage than standard PBX systems and delivers substantial cost savings and productivity gains. Some of the world’s leading companies and organizations use 3CX Phone System, including Boeing, Mitsubishi Motors, Intercontinental Hotels & Resorts, Harley Davidson, the City of Vienna and Pepsi.

3CX received the 2014 Comms National Award in the ‘Best Enterprise On-Premise Solution’ category, was included in CRN’s 2014 Annual Network Connectivity Services Partner Program Guide, and earned a 5-star rating in CRN’s partner program in 2013. 3CX was also recognized as an Emerging Vendor by CRN in 2011 and 2012, received Windows Server certification and has won several awards, including the Windowsnetworking.com Gold Award, the Windows IT Pro 2008 Editor’s Best Award and a Best Product award from Computer Shopper.

3CX has offices in Australia, Cyprus, Germany, Italy, South Africa, the United Kingdom and the United States. Visit the website http://www.3cx.com, the Facebook page www.facebook.com/3CX and the Twitter channel @3cx.

Related posts

  • 3CX is a sponsor at Microsoft Ignite 2015!

    3CX is a Silver Sponsor at Microsoft Ignite 2015, which will take place in Chicago from May 4th to 8th.

    The main focus of this year’s Microsoft Ignite is Cloud technology, Unified Communications and [...]

  • Fon Antenna: a product analysis!

    The FON movement has recently launched the FONTENNA, specifically designed to extend the range of our home hotspots. Let’s take a close look at the features of this 6.5 dB antenna [...]

No more missed calls with the new 3CXPhone for Mac

Libera il VoIP - Tue, 03/31/2015 - 18:09

True to its reputation as an innovative company, 3CX is one of the first PBX vendors to offer a Mac client complete with professional features. With the new update of the popular 3CXPhone for Mac, users receive an email notification when they miss a call. This is perfect for users who are always on the road and away from their desk: they will always be notified of every missed call and can call back.

Other new features in the 3CXPhone for Mac update
  • New VoIP Client Engine.
  • Re-provisioning from the 3CX Phone System Management Console.
  • Notification for calls abandoning the queue.
  • Added a “White”-based theme.
  • Added international language support.
  • Added “drag and drop” support for .3cxconfig, .cer and .crt files.
  • Added “Business Fax” and “Home Fax” fields to the contact details.
  • Added “SLA Breach” for queued calls.
  • Added a DND option to the Auto Profile Status when the app is idle.

For more information on the new features, see here. Download 3CXPhone for Mac here.

Related posts

VUC – 8 Years

miconda - Tue, 03/31/2015 - 15:09
The VoIP Users Conference is celebrating 8 years on the air. The weekly online meetup is going to have its 535th session during a 24-hour voipathon, starting at 12:00pm PDT (20:00 London time) on Thursday, the 2nd of April, 2015. You can find more details about the session, including the options to join via audio, video or IRC, on the VUC web site. Big credits to Randy Resnick, who started VUC, kept it going every week for the past years and is still steering its future. Kamailio developers and users are glad to have been part of many sessions, presenting the latest news related to the project or joining sessions to debate the hot topics of the real time communications world of the moment. Prepare yourself to pop up online and join the VUC voipathon even for a bit, say hi and tell us shortly what is new in your world of communications! Randy and many VUC friends will be at the Kamailio World Conference 2015, May 27-29, in Berlin, Germany, with a VUC Visions session – be sure you don’t miss the event where you can meet the people that have had a relevant impact on the transformation of real time communications over the past years and who work on defining their future!

The Minimum Viable SDP

webrtchacks - Tue, 03/31/2015 - 13:30

Unnatural shrinkage. Photo courtesy Flikr user Ed Schipul

 

One evening last week, I was nerd-sniped by a question Max Ogden asked:

That is quite an interesting question. I somewhat dislike using Session Description Protocol (SDP)  in the signaling protocol anyway and prefer nice JSON objects for the API and ugly XML blobs on the wire to the ugly SDP blobs used by the WebRTC API.

The question is really about the minimum amount of information that needs to be exchanged for a WebRTC connection to succeed.

WebRTC uses ICE and DTLS to establish a secure connection between peers. This imposes two constraints:

  1. Both sides of the connection need to send stuff to each other
  2. You need to exchange, at minimum, the ice-ufrag, ice-pwd, DTLS fingerprints and candidate information

Now the stock SDP that WebRTC uses (explained here) is a rather big blob of text: more than 1500 characters for an audio-video offer, not even considering the ICE candidates yet.

Do we really need all this? It turns out that you can establish a P2P connection with just a little more than 100 characters sent in each direction. The minimal-webrtc repository shows you how to do that. I had to use quite a number of tricks to make this work; it's a real hack.

How I did it

Get some SDP

First, we want to establish a datachannel connection. Once we have this, we can potentially use it to negotiate a second audio/video peerconnection without being constrained in the size of the offer or the answer. Also, the SDP for the data channel is a lot smaller to start with since there is no codec negotiation. Here is how to get that SDP:

var pc = new webkitRTCPeerConnection(null);
var dc = pc.createDataChannel('webrtchacks');
pc.createOffer(
  function (offer) {
    pc.setLocalDescription(offer);
    console.log(offer.sdp);
  },
  function (err) {
    console.error(err);
  }
);

The resulting SDP is slightly more than 400 bytes. Now we need also some candidates included, so we wait for the end-of-candidates event:

pc.onicecandidate = function (event) {
  if (!event.candidate) console.log(pc.localDescription.sdp);
};

The result is even longer:

v=0
o=- 4596489990601351948 2 IN IP4 127.0.0.1
s=-
t=0 0
a=msid-semantic: WMS
m=application 47299 DTLS/SCTP 5000
c=IN IP4 192.168.20.129
a=candidate:1966762134 1 udp 2122260223 192.168.20.129 47299 typ host generation 0
a=candidate:211962667 1 udp 2122194687 10.0.3.1 40864 typ host generation 0
a=candidate:1002017894 1 tcp 1518280447 192.168.20.129 0 typ host tcptype active generation 0
a=candidate:1109506011 1 tcp 1518214911 10.0.3.1 0 typ host tcptype active generation 0
a=ice-ufrag:1/MvHwjAyVf27aLu
a=ice-pwd:3dBU7cFOBl120v33cynDvN1E
a=ice-options:google-ice
a=fingerprint:sha-256 75:74:5A:A6:A4:E5:52:F4:A7:67:4C:01:C7:EE:91:3F:21:3D:A2:E3:53:7B:6F:30:86:F2:30:AA:65:FB:04:24
a=setup:actpass
a=mid:data
a=sctpmap:5000 webrtc-datachannel 1024

Only take what you need

We are only interested in a few bits of information here (see the extraction sketch after this list): 

  1. the ice-ufrag: 1/MvHwjAyVf27aLu
  2. the ice-pwd: 3dBU7cFOBl120v33cynDvN1E
  3. the sha-256 DTLS fingerprint: 75:74:5A:A6:A4:E5:52:F4:A7:67:4C:01:C7:EE:91:3F:21:3D:A2:E3:53:7B:6F:30:86:F2:30:AA:65:FB:04:24
  4. the ICE candidates
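
As an illustration, here is a small sketch of mine (not code from this post or from minimal-webrtc) that pulls these four pieces out of the SDP blob above with plain line matching:

// Sketch: extract the interesting bits from the SDP above; no full parser needed.
var lines = pc.localDescription.sdp.split(/\r?\n/);
function getValue(prefix) {
  var match = lines.filter(function (l) { return l.indexOf(prefix) === 0; })[0];
  return match ? match.substr(prefix.length) : null;
}
var minimalInfo = {
  ufrag: getValue('a=ice-ufrag:'),
  pwd: getValue('a=ice-pwd:'),
  fingerprint: getValue('a=fingerprint:sha-256 '),
  candidates: lines.filter(function (l) { return l.indexOf('a=candidate:') === 0; })
};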

The ice-ufrag is 16 characters due to the randomness requirements of RFC 5245. While it is possible to reduce that, it's probably not worth the effort. The same applies to the 24 characters of the ice-pwd. Both are random, so there is not much to gain from trying to compress them.

The DTLS fingerprint is a hex representation of the 32 bytes (256 bits) of the SHA-256 hash. Its length can easily be reduced from 95 characters to an almost optimal (assuming we want to stay binary-safe) 44 characters: 

var line = "a=fingerprint:sha-256 75:74:5A:A6:A4:E5:52:F4:A7:67:4C:01:C7:EE:91:3F:21:3D:A2:E3:53:7B:6F:30:86:F2:30:AA:65:FB:04:24";
var hex = line.substr(22).split(':').map(function (h) {
  return parseInt(h, 16);
});
console.log(btoa(String.fromCharCode.apply(String, hex)));
// yields dXRapqTlUvSnZ0wBx+6RPyE9ouNTe28whvIwqmX7BCQ=
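
Going the other way, the receiver has to expand the 44-character form back into the colon-separated hex notation before it can go into an a=fingerprint line. A sketch of mine (the helper name is not from minimal-webrtc):

// Sketch: expand the compact base64 form back into colon-separated hex.
function expandFingerprint(b64) {
  return atob(b64).split('').map(function (c) {
    var h = c.charCodeAt(0).toString(16).toUpperCase();
    return h.length === 1 ? '0' + h : h;
  }).join(':');
}
// expandFingerprint('dXRapqTlUvSnZ0wBx+6RPyE9ouNTe28whvIwqmX7BCQ=')
// yields 75:74:5A:A6:...:04:24 again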

So we're at 84 characters now (16 + 24 + 44). We can hardcode everything else in the application.

Dealing with candidates

Let’s look at the candidates. Wait, we only got host candidates. This is not going to work unless people are on the same network. STUN does not help much either, since it only works in approximately 80% of all cases.

So we need candidates gathered from a TURN server. In Chrome, the easy way to achieve this is to set the iceTransports constraint to ‘relay’, which will not even gather host and srflx candidates. In Firefox, you currently need to ignore all non-relay candidates yourself.
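
For reference, a minimal sketch of forcing relay-only gathering in Chrome at the time (the TURN URL and credentials below are placeholders, and the configuration key has since been renamed iceTransportPolicy in the spec):

// Sketch: only gather relay (TURN) candidates; server details are placeholders.
var relayOnlyPc = new webkitRTCPeerConnection({
  iceServers: [{
    urls: 'turn:turn.example.com:3478',
    username: 'user',
    credential: 'secret'
  }],
  iceTransports: 'relay' // 'iceTransportPolicy' in the current spec
});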

If you use the minimal-webrtc demo, you need to use your own TURN credentials; the ones in the repository will no longer work since they use the time-based credential scheme. On my machine, two candidates were gathered:

a=candidate:1211076970 1 udp 41885439 104.130.198.83 47751 typ relay raddr 0.0.0.0 rport 0 generation 0
a=candidate:1211076970 1 udp 41819903 104.130.198.83 38132 typ relay raddr 0.0.0.0 rport 0 generation 0

I believe this is a bug in Chrome, which gathers a relay candidate for an interface that is not routable, so I filed an issue.

Let's look at the first candidate using the grammar defined in RFC 5245: 

  1. the foundation is 1211076970
  2. the component is 1 (another reason for using the datachannel: there are no RTCP candidates)
  3. the transport is UDP
  4. the priority is 41885439
  5. the IP address is 104.130.198.83 (the IP of the TURN server I used)
  6. the port is 47751
  7. the typ is relay
  8. the raddr and rport are set to 0.0.0.0 and 0 respectively in order to avoid information leaks when iceTransports is set to relay
  9. the generation is 0; this is a Jingle extension of vanilla ICE that allows detecting ICE restarts

If we were to simply append both candidates to the 84 bytes we already have, we would end up with 290 bytes. But we don't need most of the information in there.

The most interesting information is the IP and port. For IPv4, that is 32 bits for the IP and 16 bits for the port. We can encode that using btoa again, which yields 7 + 4 characters per candidate. Actually, if both candidates share the same IP, we can skip encoding it again, reducing the size further.
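
As a sketch of the idea (my own illustrative encoding; the exact scheme minimal-webrtc uses differs slightly, as the example string further down shows), an IPv4 address and a port could be packed like this:

// Sketch: pack an IPv4 address (4 bytes) and a port (2 bytes) into short base64 tokens.
function packAddress(ip, port) {
  var ipBytes = ip.split('.').map(Number);
  var portBytes = [port >> 8, port & 0xff];
  var b64 = function (bytes) {
    return btoa(String.fromCharCode.apply(String, bytes)).replace(/=+$/, '');
  };
  return { ip: b64(ipBytes), port: b64(portBytes) };
}
// packAddress('104.130.198.83', 47751) yields { ip: 'aILGUw', port: 'uoc' }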

After consulting RFC 5245, it turned out that the foundation and priority can actually be skipped, even though that requires some effort. Everything else can easily be hard-coded in the application. 
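
For illustration, here is a sketch of rebuilding a full relay candidate line from nothing but the IP and port. The hard-coded values are my own choices, not necessarily what minimal-webrtc uses:

// Sketch: regenerate a relay candidate line; only ip and port are actually transferred.
function buildRelayCandidate(ip, port) {
  var componentId = 1;      // datachannel only, so there is no RTCP component
  var typePreference = 0;   // relayed candidates get the lowest type preference
  var localPreference = 65535;
  // priority formula from RFC 5245 section 4.1.2.1
  var priority = (typePreference << 24) + (localPreference << 8) + (256 - componentId);
  return 'a=candidate:0 ' + componentId + ' udp ' + priority + ' ' + ip + ' ' + port +
    ' typ relay raddr 0.0.0.0 rport 0 generation 0';
}
// buildRelayCandidate('104.130.198.83', 47751)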

sdp.length = 106

Let’s summarize what we have so far: 

  1. the ice-ufrag: 16 characters
  2. the ice-pwd: 24 characters
  3. the sha-256 DTLS fingerprint: 44 characters
  4. the IP and port: 11 characters for the first candidate, 4 characters for subsequent candidates from the same IP.

Now we also want to encode whether this is an offer or an answer. Let’s use uppercase O and A respectively. Next, we concatenate this and separate the fields with a ‘,’ character. While that is less efficient than a binary encoding or one that relies on fixed field lengths, it is flexible. The result is a string like:

O,1/MvHwjAyVf27aLu,3dBU7cFOBl120v33cynDvN1E,dXRapqTlUvSnZ0wBx+6RPyE9ouNTe28whvIwqmX7BCQ=,1k85hij,1ek7,157k

106 characters! So that is tweetable. Yay!
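
Putting it together, a sketch of the serialization described above. The values are the ones from this post; the candidate tokens are taken from the example string as-is, labeled the way I read the summary (7 characters for the IP, 4 per port):

// Sketch: assemble the minimal descriptor and check its length.
var fields = [
  'O',                                             // offer ('A' for an answer)
  '1/MvHwjAyVf27aLu',                              // ice-ufrag
  '3dBU7cFOBl120v33cynDvN1E',                      // ice-pwd
  'dXRapqTlUvSnZ0wBx+6RPyE9ouNTe28whvIwqmX7BCQ=',  // compacted DTLS fingerprint
  '1k85hij',                                       // first candidate: encoded IP
  '1ek7',                                          // first candidate: encoded port
  '157k'                                           // second candidate: encoded port (same IP)
];
console.log(fields.join(',').length); // 106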

You better be fast

Now, if you try this, it turns out it does not usually work unless you are fast enough at pasting stuff.

ICE is short for Interactive Connectivity Establishment. If you are not fast enough in transferring the answer and starting ICE at the Offerer, it will fail. You have less than 30 seconds between creating the answer at the Answerer and setting it at the Offerer. That's pretty tough for humans doing copy-paste, and it will not work via Twitter.

What happens is that the Answerer is trying to perform connectivity checks as explained in RFC 5245. But those never reach the Offerer since we are using a TURN server. The TURN server does not allow traffic from the Answerer to be relayed to the Offerer before the Offerer creates a TURN permission for the candidate, which it can only do once the Offerer receives the answer. Even if we could ignore permissions, the Offerer can not form the STUN username without the Answerer’s ice-ufrag and ice-pwd. And if the Offerer does not reply to the connectivity checks by Answerer, the Answerer will conclude that ICE has failed.

 

So what was the point of this?

Now… it is pretty hard to come up with a use case for this. It fits into an SMS. But sending your peer a URL where you both connect using a third-party signaling server is a lot more viable most of the time, especially given that, to achieve this, I had to make some tough design decisions like forcing a TURN server and taking some shortcuts with the ICE candidates which are not really safe. Also, this cannot use trickle ICE.

¯\_(ツ)_/¯

(thanks, Max)

So is this just a case study in arcane signaling protocols? Probably. But hey, I can now use IRC as a signaling protocol for WebRTC. IRC has a limit of 512 characters, so one can include even more candidates and information. CTCP WEBRTC, anyone?

{“author”: “Philipp Hancke“}

Want to keep up on our latest posts? Please click here to subscribe to our mailing list if you have not already. We only email post updates. You can also follow us on Twitter at @webrtcHacks for blog updates and news of technical WebRTC topics, or our individual feeds @chadwallacehart, @reidstidolph, @victorpascual and @tsahil.

The post The Minimum Viable SDP appeared first on webrtcHacks.
