News from Industry

OMG WebRTC is tracking me! Or is it?

webrtchacks - Thu, 11/05/2015 - 15:23

There has been more noise about WebRTC making it possible to track users. We have covered some of the nefarious uses of WebRTC, and how to look out for them, before. After reading a blog post on this topic covering some allegedly new, unaddressed issues a week ago, I decided to ignore it after some discussion on the Mozilla IRC channel. But it has come up again on the twitter-sphere and Tsahi said ‘ouch’, so here are my thoughts.

Claims

The blog post (available here) makes a number of claims about how certain Chrome behavior makes fingerprinting easier:

  • Chrome started caching certificates for 30 days recently, creating a cookie-like attack surface for privacy
  • this allows cross-origin tracking of users
  • the incognito mode behavior is inconsistent with respect to this

Caching certificates

First, there is a claim that the way Chrome caches certificates changed recently:

In the past, Google Chrome used to generate a new self-signed certificate for every WebRTC PeerConnection. But now (using Chrome 46, or maybe earlier as i did not check) it generates a self-signed certificate which is valid for one month and uses it for all PeerConnections of a particular domain.

The code used to demonstrate this behaviour is rather odd, too. It uses the getStats API to query the fingerprint, which is also available more easily in the SDP.
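To illustrate the simpler route, here is a sketch of pulling the fingerprint out of the SDP directly. In a browser the SDP string would come from pc.localDescription.sdp after createOffer()/setLocalDescription(); the sample blob and the helper name here are made up for the example:

```javascript
// Sketch: pull the DTLS certificate fingerprint out of an SDP blob.
// The a=fingerprint: line carries the hash algorithm and the value.
function extractFingerprint(sdp) {
  var match = sdp.match(/^a=fingerprint:(\S+)\s+(\S+)/m);
  return match ? { algorithm: match[1], value: match[2] } : null;
}

// Hard-coded sample standing in for pc.localDescription.sdp:
var sampleSdp = [
  'v=0',
  'o=- 46117317 2 IN IP4 127.0.0.1',
  'a=fingerprint:sha-256 6B:8B:5D:EA:59:04:20:23:29:C8:87:1C:CC:87:32:BE',
  'a=setup:actpass'
].join('\r\n');

console.log(extractFingerprint(sampleSdp).algorithm); // → sha-256
```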

Chrome has cached certificates in this way for about two years; this is not real news. One of the reasons is that generating the private keys used for DTLS is rather expensive, especially on mobile devices. In the future, there will be more control over this behaviour. Neither Firefox nor Edge currently caches certificates.

To be fair, the WebRTC team made a serious blunder here. Until Chrome 45, the certificate was not cleared when cookies were cleared, only when all data was cleared. The bugfix for this only appeared in the Chrome 47 release notes:

Issue 510850 DTLS cert should be cleared when cookies are cleared

Cross-Origin Tracking

So this part is not really news. The second claim made in the blog post is that this enables cross-origin tracking:

To test this go to http://www.kapejod.org/tracking/test.html and to http://kapejod.org/tracking/test.html. Open the network tab of Chrome’s developer console and compare the urls of the requested “tracking.png”. They should contain the same fingerprint, now!

They do. Now, let’s look at this test page:

// make up some random id
var transactionId = 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function (c) {
    var r = Math.random() * 16 | 0,
        v = c == 'x' ? r : (r & 0x3 | 0x8);
    return v.toString(16);
});
var fragment = document.createDocumentFragment();
var div = document.createElement("DIV");
div.innerHTML = '<iframe src="http://kapejod.org/tracking/identify.html?' + transactionId + '" width="1" height="1" style="display:none;"/>';
fragment.appendChild(div);
document.body.insertBefore(fragment, document.body.childNodes[document.body.childNodes.length - 1]);

It includes the URL http://kapejod.org/tracking/identify.html. Let’s look at the code there as well. It executes the code shown above and logs the fingerprint to the console:

console.log('your fingerprint is: ' + fingerprint);

Now why is the fingerprint the same? Well, the iframe is always included from kapejod.org, which means the JavaScript is executed within the context of that origin.
So Chrome can use the persisted fingerprint, as well as any cookies and localStorage data. The attack surface here is no worse than setting a cookie.

Another thing related to this (and I am surprised it has not yet been mentioned) is the deviceIds returned by navigator.mediaDevices.enumerateDevices. Those are also persisted with the same lifetime as cookies. The W3C mediacapture specification has a paragraph about the security and privacy considerations of this:

The identifiers for the devices are designed to not be useful for a fingerprint that can track the user between origins, but the number of devices adds to the fingerprint surface. It recommends to treat the per-origin persistent identifier deviceId as other persistent storages (e.g. cookies) are treated.

Again, WebRTC and other HTML5 techniques increase the fingerprint surface. But by design, this is not worse than cookies or equivalent techniques like localStorage.

Incognito Mode

Last but not least the blog post makes claims about the incognito mode:

But to make it generate a new one you have to close ALL incognito tabs. Otherwise you can be tracked across multiple domains.

Again, this behaviour is consistent with the incognito mode behaviour for things like localStorage, in both Chrome and Firefox. Try it: in incognito mode, open a site and set something in localStorage. Open another tab, then close the first tab. Navigate to the same site and check localStorage. Boo!

tl;dr

There is no real news here. In Germany, we call this ‘olle Kamellen’ – old news.

{“author”: “Philipp Hancke“}

Want to keep up on our latest posts? Please click here to subscribe to our mailing list if you have not already. We only email post updates. You can also follow us on twitter at @webrtcHacks for blog updates and news of technical WebRTC topics or our individual feeds @chadwallacehart, @victorpascual and @tsahil.

The post OMG WebRTC is tracking me! Or is it? appeared first on webrtcHacks.

WebRTC Testing Challenges: An Upcoming Webinar and a Recent Session

bloggeek - Thu, 11/05/2015 - 12:00

Announcing an upcoming free webinar on the challenges of WebRTC testing.

This week I took a trip to San Francisco, where the main goal was to attend the WebRTC Summit and talk there about the challenges of WebRTC testing. This was part of the marketing effort at testRTC, a company I co-founded with a few colleagues alongside my consulting business.

During the past year, we’ve gained a lot of interesting insights regarding the current state of testing in the WebRTC ecosystem, which made for good presentation material. The session at the WebRTC Summit went rather well, with a lot of positive feedback. One such comment was this one, which I received by email later that day:

I liked much your presentation which indeed digs into one of the most relevant problems of WebRTC applications, which is not generally discussed in conferences.

My own favorite is what you can see in the image I added above – many of the vendors out there just don’t make the effort to test their WebRTC implementations properly – not even when they go to production.

I’ve identified 5 main challenges that are facing WebRTC service developers:

  1. Browser vendor changes (hint: they are many, and they break things)
  2. NAT traversal (testing it isn’t trivial)
  3. Server scale (many just ignore this one)
  4. Service uptime (checking for the wrong metric)
  5. Orchestration (a general challenge in WebRTC testing)

The slides from my session are here below:

Overcoming the Challenges in Testing WebRTC Services from Tsahi Levent-levi

 

Two weeks from now, I will be hosting a webinar on this same topic with the assistance of Amir Zmora. While some of the content may change, most of it will still be there. If you are interested, be sure to join us online at no cost. To make things easier for you, there are two sessions, to fit any timezone.

When? Wednesday, November 18

Session 1: 8 AM GMT, 9 AM CET, 5 PM Tokyo

Session 2: 4 PM GMT, 11 AM EST, 8 AM PST

Register now

 

Test and Monitor your WebRTC Service like a pro - check out how testRTC can improve your service's stability and performance.

The post WebRTC Testing Challenges: An Upcoming Webinar and a Recent Session appeared first on BlogGeek.me.

6th FOKUS FUSECO Forum

miconda - Tue, 11/03/2015 - 21:00
Fraunhofer FOKUS Research Institute, the place where the SIP Express Router (SER) project started (which over time resulted in the Kamailio project), is organizing the 6th edition of the FUSECO Forum during Nov 5-6, 2015, in Berlin, Germany.

The two-day event combines practical workshops with panels and keynote presentations, revealing the trends in real time communications, from classic telephony and 4/5G to IoT, smart cities and machine to machine communications.

For more details, see:

Representatives from the Kamailio community will be at the event, myself included, along with Dragos Vingarzan (initial developer of the IMS extensions) and Elena-Ramona Modroiu (core developer).

Can Apple’s On-Device Analytics Compete with Google and Facebook?

bloggeek - Tue, 11/03/2015 - 12:00

I wonder. Can Apple maintain its lead without getting deep and dirty in analytics?

Apple decided to “take the higher ground”. It has pivoted this year, focusing a lot on privacy: not keeping user keys, for one, but also collecting little or no information from devices and doing as much analytics on-device as possible. For now, it seems to be working.

But can it last?

Let’s head 5 or 10 years into the future.

Now let’s look at Google and Facebook. Both have a voracious appetite for data. Both are analytics driven to the extreme – they will analyze everything and anything possible to improve their services, where improving may mean increasing stickiness, increasing ROI and ARPU, etc.

As time goes by, computing power increases, but also the technology and understanding we have at our disposal in sifting through and sorting out huge amounts of data. We call it Big Data and it is changing all the time. A year or two ago, most discussions on big data were around Hadoop and workloads. This year it was all about real time and Spark. There’s now a shift happening towards machine learning (as opposed to pure analytics), and from there, we will probably head towards artificial intelligence.

To get better at it, there are a few things that need to be in place as well as ingrained into a company’s culture:

  1. You need to have lots and lots of data. The more the merrier
  2. The data needs to be available, and the algorithms put in place need to be tweaked and optimized daily. Think about how Google changes its search ranking algorithm all the time
  3. You need to be analytics driven. It needs to be part and parcel of your products and services – not something done as an afterthought in a data warehouse to generate a daily report to a manager

These traits are already there for Google and Facebook. I am less certain regarding Apple.

Fast forward 5 to 10 years.

  • Large companies collect even more data
  • Technologies and algorithms for analytics improve
  • Services become a lot smarter, more personalized and useful

Where would that leave Apple?

If a smartphone (or whatever device we will have at that time) really becomes smart – would you pick out the shiny toy with the eye candy UI or the one that gets things done?

Can Apple stay long term with its stance towards data collection policies or will it have to end up collecting more data and analyzing it the way other companies do?

The post Can Apple’s On-Device Analytics Compete with Google and Facebook? appeared first on BlogGeek.me.

FreeSWITCH Week in Review (Master Branch) October 24th-October 31st

FreeSWITCH - Tue, 11/03/2015 - 00:07

FreeSWITCH got some neat improvements this week: work went into improving the handling of the vw and vh core file parameters in mod_av to avoid video cropping and crashing, a new configuration setting was added in mod_opus to show decoder stats at the end of a call, and SRTP and SRTCP crypto keys are now exposed as channel variables to help with debugging.

Join us Wednesdays at 12:00 CT for some more FreeSWITCH fun! And head over to freeswitch.com to learn more about FreeSWITCH support.

New features that were added:

  • FS-8281 [core] Expose SRTP and SRTCP crypto keys as channel variables to aid with debugging
  • FS-8313 [mod_opus] Introduced new configuration setting ‘decoder-stats’ to show decoder stats at end of call (how many times it did PLC or FEC)
  • FS-8380 [mod_av] Improve the handling of vw and vh core file parameters to avoid video cropping and crashing

Improvements in build system, cross platform support, and packaging:

  • FS-8389 [build] Fixed msvc 2015 build warnings
  • FS-8398 [Ubuntu] Added event_handlers/mod_amqp to avoided modules for Ubuntu 14.04 Trusty

The following bugs were squashed:

  • FS-8222 [verto_communicator] Updated getScreenId.js in order to detect plugin issues and attached an ‘ended’ event to screenshare stream in order to detect ‘stop sharing’ click
  • FS-8392 [mod_av] Fixed rtpmap to allow both H263 and H263+ codecs to be offered
  • FS-8373 [mod_av] Fix for bad recording quality when using fast encoding
  • FS-8397 [core] Fixed a race condition incrementing the event-sequence number
  • FS-8154 [core] Fixed a segmentation fault occurring while eavesdropping on video call
  • FS-8391 [core] Fixed a SDP parsing error for rtcp-fb
  • FS-8319 [mod_opus] Fixed and cleaned up switch_opus_has_fec() and switch_opus_info() to avoid FALSE positives for packets with FEC at high frame sizes.
  • FS-8344 [mod_opus] Toggle FEC ON only on the last frame which is to be packed

The FreeSWITCH 1.4 branch had a few bug fixes added this week.

The following bugs were squashed:

  • FS-8338 [core] Fixed an issue where, when setting the ringback variable on an outbound call via the bridge app, a stereo inbound leg caused the ringback tone to still be rendered as mono, making the resulting ringback higher pitched and incorrect.
  • FS-8378 [mod_esf] [core] Fixed a crash when using esf_page over loopback when transcoding and added tests for esf over loopback. Also refactored a bit to clarify the code and get better debug output in gdb
  • FS-8370 [mod_rayo] Fixed another place where a message was freed after being queued for delivery, which resulted in a freed object being serialized, crashing FS

Where’s the Socket.io of WebRTC’s Data Channel?

bloggeek - Mon, 11/02/2015 - 12:00

Someone should build a generic fallback…

If you don’t know Socket.io then here’s the gist of it:

  • Socket.io is a piece of JS client code, and a server side implementation
  • It enables writing message passing code between a client and a server
  • It decides on its own what transport to use – WebSocket, XHR, SSE, Flash, pigeons, …

It is also very popular – as a developer, it lets you assume a WebSocket-like interface and develop on top of it; and it takes care of all the mess of answering the question “but what if my browser/proxy/whatever doesn’t support WebSocket?”

I guess there are use cases where the WebRTC data channel is like that – you’d love to have the qualities it gives you, such as reduced server load and latency, but you can live without it if you must. It would be nice if we’d have a popular Socket.io-like interface to do just that – to attempt first to use WebRTC’s data channel, then fallback to either a TURN relay for it or to WebSocket (and degrading from there further along the line of older transport technologies).
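A minimal sketch of what such a framework’s core could look like. The names are made up, and the fake transports below stand in for the real data channel, TURN relay and WebSocket attempts:

```javascript
// Sketch of a Socket.io-style fallback chain: try each transport
// factory in order, moving to the next one on failure or timeout.
function connectWithFallback(factories, timeoutMs) {
  timeoutMs = timeoutMs || 2000;
  var attempt = function (i) {
    if (i >= factories.length) {
      return Promise.reject(new Error('no transport available'));
    }
    var timeout = new Promise(function (resolve, reject) {
      var t = setTimeout(function () { reject(new Error('timeout')); }, timeoutMs);
      if (t.unref) t.unref(); // in node, don't keep the process alive for the demo
    });
    return Promise.race([factories[i](), timeout])
      .catch(function () { return attempt(i + 1); });
  };
  return attempt(0);
}

// Fake transports standing in for the real attempts:
var tryDataChannel = function () { return Promise.reject(new Error('blocked by proxy')); };
var tryWebSocket = function () { return Promise.resolve({ kind: 'websocket' }); };

connectWithFallback([tryDataChannel, tryWebSocket]).then(function (transport) {
  console.log('connected via ' + transport.kind); // → connected via websocket
});
```

A real implementation would of course also need a common send/receive interface on top of whichever transport won, which is exactly the part Socket.io got right.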

The closest I’ve seen to it is what AirConsole is doing. They enable a smartphone to become the gamepad of a browser. You get a smartphone and your PC connected so that whatever you do in the phone can be used to control what’s on the PC. Such a thing requires low latency, especially for gaming purposes; and WebRTC probably is the most suitable solution. But WebRTC isn’t always available to us, so AirConsole just falls back to other mechanisms.

While a gaming console is an obvious use case, and I did see it in more instances lately, I think there’s merit to such a generic framework in other instances as well.

Time someone implemented it

The post Where’s the Socket.io of WebRTC’s Data Channel? appeared first on BlogGeek.me.

ClueCon Weekly – October 28, 2015 – Brian West

FreeSWITCH - Fri, 10/30/2015 - 18:20

Links:

http://tldp.org/HOWTO/Traffic-Control-HOWTO/intro.html

Apple WebRTC Won’t Happen Soon

bloggeek - Thu, 10/29/2015 - 12:00

Don’t wait up for Apple to get you WebRTC in the near future.

Like many others, I’ve seen the minor twitter storm of our minuscule world of WebRTC. The one in which a screenshot of an official Apple job description had the word WebRTC on it. Amir Zmora does a good job of outlining what’s ahead of Apple in adding WebRTC. The thing he forgot to mention is when we should expect anything.

The below are generally guesses of mine. They are the roadmap I’d set for Apple if I were the one calling the shots.

When will we see an Apple WebRTC implementation?

Like anyone else, I am clueless to the inner workings of Apple. If the job postings tell us anything, it is that Apple is just starting out. Based on my experience with media engine implementations, and the time it took Google, Mozilla and Microsoft to put a decent release out, I’d say:

We are at least 1 year away from a first, stable implementation

It takes time to implement WebRTC. And it needs to be done across a growing range of devices and hardware when it comes to the Apple ecosystem.

Where will we see an Apple WebRTC implementation?

Safari on Mac OS X. The next official release of it.

  • This one is the easiest to implement for with the least amount of headache and hardware variance
  • I am assuming iOS, iPhone and iPad get a lot more stress and focus in Apple, so getting something like WebRTC into them would be more challenging

The Safari browser on iPad and iPhone will come next, appearing on iPhone 6 and onwards. Maybe iPhone 5, but I wouldn’t bet on it.

We will later see it supported in the iOS WebView, probably 9-12 months after the release of Safari on iOS.

The Apple TV would be left out of the WebRTC party. So will the Apple Watch.

Which Codecs will Apple be using?

H.264, AAC-ELD and G.711. Essentially, what they use in FaceTime with the addition of G.711 for interoperability.

  • Apple won’t care about media quality between Apple devices and the rest of the world, so doing Opus will be considered a waste of time – especially for a first release
  • H.264 and AAC-ELD is what you get in FaceTime today, so they just use it in WebRTC as well
  • G.711 will be added for good measure to get interoperability going
  • VP8 will be skipped. Microsoft is skipping it, and H.264 should be enough to support all browsers a year from now

Will they aim for ORTC or WebRTC APIs?

Apple sets its sights on Google; it now holds Microsoft as a best friend, with Office releasing on iOS.

On one hand, going with ORTC would be great:

  • Apple will interoperate with Microsoft Edge on the API and network level, with Chrome and Firefox on the network level only
  • Apple gets to poke a finger in Google’s eye

On the other hand, going with WebRTC might be better:

  • Safari tends to do any serious upgrades with new releases of the OS. Anything in-between is mostly security updates. This won’t work well with ORTC and will work better with WebRTC (WebRTC is expected to be finalized in a few months time – well ahead of the 1 year estimate I have for the Apple WebRTC implementation)
  • Microsoft Edge isn’t growing anywhere yet, so aligning with it instead of the majority of WebRTC enabled browsers might not make the impact that Apple can make (assuming they are serious about WebRTC and not just adding it as an afterthought)

Being adventurous, I’d go for ORTC if I were Apple. Vindictiveness goes a long way in decision making.

Extra

On launch day, I am sure that Bono will be available on stage with Tim Cook. They will promise a personal video call over WebRTC running in WebKit inside Safari to the first 10 people who stand in line in Australia to purchase the next iPhone.

And then again, I might be mistaken and tomorrow, WebRTC will be soft launched on the Mac. Just don’t build your strategy on it really happening.

 

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post Apple WebRTC Won’t Happen Soon appeared first on BlogGeek.me.

Kamailio Advanced Training, Nov 30 – Dec 02, 2015, in Berlin

miconda - Thu, 10/29/2015 - 05:30
The next European edition of the Kamailio Advanced Training will take place in Berlin, Germany, during November 30 – December 02, 2015.

The content will be based on the latest stable series of Kamailio, 4.3.x, released in June 2015 – the major version that brought a large set of new features, currently at minor release v4.3.3.

The class in Berlin is organized by Asipto and will be taught by Daniel-Constantin Mierla, co-founder and core developer of the Kamailio SIP Server project.

Read more details about the class and registration process at:

Kamailio Dispatcher Discovery Service with NodeJS and Etcd

miconda - Wed, 10/28/2015 - 22:11
An interesting resource for those relying on NodeJS for various needs and using Kamailio as a load balancer in front of Asterisk or other SIP systems (FreeSWITCH, media servers, PSTN gateways, etc.):

Practically, this tool can be run alongside Kamailio and each SIP system (e.g., Asterisk), using Etcd as the communication channel to publish which SIP systems are available. Based on this information, the tool instance next to Kamailio generates the dispatcher.list file and instructs Kamailio to reload it.

Etcd is a highly-available key value store for shared configuration and service discovery, developed as part of the CoreOS project.

ThinQ - Least Cost Routing in the Cloud - KazooCon 2015

2600hz - Wed, 10/28/2015 - 20:20

The team at ThinQ show how to set up your routing profile, carrier selection, high volume traffic management, and LCR routing.

SIPLABS - Hard Rocking Kazoo - KazooCon 2015

2600hz - Wed, 10/28/2015 - 20:19

Founder and CEO Mikhail Rodionov discusses all the projects and code contributions that they have built for Kazoo over the past year.

Voxter - Building Value with Kazoo - KazooCon 2015

2600hz - Wed, 10/28/2015 - 20:16

The Voxter team discuss their code contributions to the Kazoo platform, explain how they are utilizing it, and give an in-depth demo of WhApps.

Telnexus - Quote to Cash – KazooCon 2015

2600hz - Wed, 10/28/2015 - 20:12

Telnexus CEO Vernon Keenan discusses how he built the Managed Service Provider Telnexus from the ground up and the lessons he has learned in the process.

VirtualPBX - Back Office, Delivering Voice in a Competitive Market - KazooCon 2015

2600hz - Wed, 10/28/2015 - 20:04

In a competitive market, high quality voice services alone are rarely enough. Lon Baker speaks about the customer lifecycle, back office systems from Sales to CRM to deployment, and how to drive profitable growth while delivering an excellent customer experience.

Billing Data with Kazoo - KazooCon 2015

2600hz - Wed, 10/28/2015 - 20:00

Product Director Aaron Gunn discusses billing options for SaaS and IaaS customers. This includes the CDR API, AMQP, and integrating VoIP billing platforms.

Tuning Kazoo to 10,000 Handsets - KazooCon 2015

2600hz - Wed, 10/28/2015 - 19:58

People love to talk about scale. Some vendors pitch that their systems easily support 100,000 simultaneous calls, or 500 calls per second, etc. The reality is, in the real world, people’s behaviors vary and the feature sets they use can cut these numbers down quickly. For example, ask that same vendor claiming 100,000 simultaneous calls if it can be done while call recording, call statistics and other features are turned on at the same time, and you’ll usually get a very different, cautious, qualified response.

In this presentation, we’ll show you how to set up your infrastructure to support 100,000 simultaneous calls.

Detecting and Managing VoIP Fraud - KazooCon 2015

2600hz - Wed, 10/28/2015 - 19:45

This is an overview of VoIP fraud, different types of fraud and what telecommunication carriers are doing to combat this issue. Types of fraud include International / Premium Number Fraud, Impersonation / Social Engineering, Service Degradation / Denial of service. Presented by Mark Magnusson at KazooCon 2015.

The Next Wave - KazooCon 2015

2600hz - Wed, 10/28/2015 - 19:43

CTO Karl Anderson discusses the state of Kazoo. This includes integrations with FreeSWITCH, erlang, and Kamailio. Reseller milestones include the release of whitelabeling, webhooks, migration, carriers, debugging, account management and more.

IOT Messaging – Should we Head for the Cloud or P2P?

bloggeek - Tue, 10/27/2015 - 12:00

A clash of worlds.

With the gazillions of devices said to be part of the IOT world, how they interact and speak to each other is going to be important. When we talk about the Internet of Things, there are generally 3 network architectures that are available:

  • Star topology
  • P2P
  • Hubs and spokes

#1 – Star Topology

The star topology means that each device gets connected to the data center – the cloud service. This is how most of our internet works today anyway – when you came to this website to read this post, you got connected to my server and its hosting company. When you chat on Facebook, your messages go through Facebook’s data centers. When your heat sensor has something to say… it will probably tell it to its server in the cloud.

Pros
  • We know how it works. We’ve been doing it for a long time now
  • Centralized management and control makes it easier to… manage and control
  • Devices can be left as stupid as can be
  • Data gets collected, processed and analyzed in a single place. These humongous amounts of data mean we can derive and deduce more from them (if we take the time to do so)
Cons
  • Privacy concerns. There’s a cloud server out there that knows everything and collects everything
  • Security. Assuming the server gets hacked… the whole network of devices gets compromised
  • As the number of devices grows and the amount of data collected grows – so do our costs to maintain this architecture and the cloud service
  • Latency. At times, we need to communicate across devices in the same network. Sending that information towards the cloud is wasteful and slower

#2 – P2P

P2P means devices communicate directly with each other. No need for mediation. The garage sensor needs to turn on the lights in the house and start the heating? Sure thing – it just tells them to do so. No need to go through the cloud.

Pros
  • Privacy. Data gets shared only by the devices that need direct access to it
  • Security. You need to hack more devices to gain access to more data, as there’s no central server
  • Low latency. When you communicate directly, the middleman isn’t going to waste your time
  • Scale. It is probably easier to scale, as more devices out there doesn’t necessarily mean more processing power required on any single device to handle the network load
Cons
  • Complicated management and control. How do these devices find each other? How do they know each other’s language? How the hell do you know what goes on in your network?
  • There’s more research than real deployments here. It’s the wild west
  • Hard to build real smarts on top of it. With less data being aggregated and stored in a central location, how do you make sense of it and exploit big data analytics?

#3 – Hubs and Spokes

As with all technology, there are middle ground alternatives. In this case, a hubs and spokes model. In most connected home initiatives today, there’s a hub device that sits somewhere in the house. For example, Samsung’s SmartThings revolves around a Hub, where all devices connect to it locally. While I am sure this hub connects to the cloud, it could send less or more data to the cloud, based on whatever Samsung decided to do with it. It serves as a gateway to the home devices that reduces the load on the cloud service and makes it easier to develop and communicate locally across home devices.

Pros
  • Most of what we’d say is advantageous for P2P works here as well
  • The manageability and familiarity of this model are an added bonus
Cons
  • Single point of failure. Usually, you won’t build high availability and redundancy for a home hub device. If that device dies…
  • Whose hub will you acquire? What can you connect to it? Does that mean you commit to a specific vendor? A hub might be harder to replace than a cloud service
  • An additional device is one more thing we need to deal with in our system. Another moving part
But there’s more

In the recent Kranky Geek event, Tim Panton, our magician, decided to show how WebRTC’s data channel can be used to couple devices using a duckling protocol. To keep things short, he showed how a device you just purchased can be hooked up to your phone and make that phone the only way to control and access the purchased device.

You can watch the video below – it is definitely interesting.

To me this means that:

  1. We don’t discuss enough the network architectures and topologies that are necessary to make IOT a reality
  2. The result will be hybrid in nature, though I can’t say where it will lead us

 

Kranky and I are planning the next Kranky Geek - Q1 2016. Interested in speaking? Just ping me through my contact page.

The post IOT Messaging – Should we Head for the Cloud or P2P? appeared first on BlogGeek.me.
