Interview with Dr. Varun Singh, Founder and CEO of CallStats.io


Dr. Varun Singh is the Founder and CEO of callstats.io, a startup which analyses and optimizes the quality of multimedia in real-time communication. callstats.io’s vision is to make real-time communications frictionless and effortless to set up, operate, and scale. He received his Ph.D. degree from Aalto University, Finland, in 2015. Over the last 8 years, Varun has made several important contributions to different standardization organizations: 3GPP (2008 – 2010), IETF (since 2010), and W3C (since 2014). He is the co-author of the WebRTC Statistics API and several enhancements to the RTP protocol.

MP3 download

Transcript

Dan:
Hi, Varun. How are you doing?

Varun Singh:
Hi, Dan. How are you doing? I’m doing well.

Dan:
Great. I was thinking back to the first time I met you and it was several years ago at an IETF meeting. I think you were still a student then working on congestion control. Before we get into talking about CallStats, I would like to hear a little bit about your background, particularly as it relates to this space, but also in general. How did you get to this point from there?

Varun Singh:
Sure, Dan. Thanks for asking the question. I do remember, it was probably Prague 2011 or something similar when we met, maybe the first WebRTC IETF meeting. Yes, thank you for asking those questions. The road indeed has been long, I would say.

I started in computer science when I was a teenager, writing code. At some point when I was at university, I started working with networks and embedded systems. Back in the early 2000s, after the first dot-com bubble, I think there was a lot of emphasis on hardware-oriented systems, because people were wary of software systems, especially since they had already collapsed once.
Initially, when I graduated with a bachelor’s, I started looking for a job and I found one. My first job was at LG Electronics, where I did a six-month internship/junior programmer job, working on some protocols and on synchronization and migration of devices, and then I moved on to a company called STMicroelectronics. That’s where my love for multimedia began.

I started there writing device drivers for cameras on mobile phones. This was 2004, so the cameras were still VGA, doing not more than five, seven, ten seconds of video recording. Remember those days where you took a video and it would stop after ten seconds, and then you’d take another video? That was basically the first product that I worked on. The ten seconds was just arbitrary, if I remember correctly. We just came up with that because we had to draw a line somewhere. Ten was as good a number as 20, for example. So, I did that, and I slowly moved up the stack, because the cameras were already at 5 megapixels back in 2004. It took a long time before they actually rolled out commercially, because of the cost and the marketing and everything. It takes time to figure out what people want and whether they should roll out with a 5-megapixel camera now or in two years, and so on and so forth.

Dan:
The technology was far advanced beyond what the market was really ready for?

Varun Singh:
Right, and there were not so many use cases for a 5-megapixel camera. This was 2004-2005. People were still figuring out … Flickr was the big thing at that time and YouTube had not yet arrived. After two years of working with them, I kind of got bored, because there are only so many megapixels. You go from 1 megapixel to 5, and at some point it was just changing a number. I decided that I would take … I was looking for new jobs and I was working in Europe at that time.
One of the things I found was that no one hires a bachelor’s graduate in Europe, because it’s one of these old-school continents where you have to have a Master’s. Everyone, my boss included, had a Master’s or a PhD. So, I applied to a bunch of schools, both in the U.S. and Europe. Then I took this one in Helsinki, Finland, because it was free: basically everything was covered except for day-to-day expenses, and I had a research position that would cover those. So, a fully covered education, and I moved there.

I met Jorg Ott, who’s fairly well-known in the IETF, and I started working with him in 2006. That’s how I got into both congestion control and multimedia, and I’ve been working on that since. So, by the time I met you and WebRTC came along, I kind of knew how to make video better, and I was looking for companies and applications that would make video mainstream. That actually happened as an outcome of the WebRTC initiative, because companies started rolling out more video. Instagram was already around at that time, Facebook photos were a big thing, and photos were on the rise again on mobile devices.
The whole mobile filtering thing came along in the intervening years between then and now. Snapchat became a camera company, shedding its old privacy angle, the pictures-timing-out-in-ten-seconds thing, and video became mainstream by itself. Today, video is mainstream because of Netflix and YouTube, among others. My journey was basically a programmer learning: going from low level to high level, going up the stack, and then going into research, figuring out what the next things would be.

Finally, in 2012, I was kind of done with the research and started looking for new opportunities. WebRTC was on the rise, and I actually hoped at that time that it would be mainstream, globally available and everything. By 2014 there were hundreds of companies using it, though that had taken some time. In 2013, I started thinking about what I could do next. Of course, one way was to build a company leveraging video, and since I came from a research background I wanted to do something along those lines. That’s my story: going from programmer to researcher, then trying to figure out what to do next.

Dan:
But always connected to video in some way?

Varun Singh:
Yeah. Since 2004, connected to video or multimedia in some way. Before that, it was basically embedded systems.

Dan:
Okay. How did you get to CallStats? You’re talking about video, in general, but CallStats is a little different than that. What I’m really curious about is sort of what really convinced you that the world needed this?

Varun Singh:
The interesting transition happened when I started building my first WebRTC kind of product. This was 2012-13, being at the university, building products for students and teachers. It was also the period when MOOCs, massive open online courses, were becoming big. People were taking courses online, most of the lectures were recorded, and then they were answering questions over weeks, right? We were asking what we could do in this emerging space: people would take these courses online, but they would still need to do group work in some form. Doing coursework by yourself is not that interesting, and there is a big drop-off rate on these things because people just lose interest over time.
We started building a product around that, something that would be like chat, but then you would have video and you would have some sharing, basically using all the Web APIs at that time: WebRTC, screen sharing, and so on and so forth. What we continuously ran into problems with was just bad quality, like people not being able to connect, and we were trying to diagnose it. We started building something like CallStats internally, probably because we said, “I know the stats. I know congestion control. We know how to measure this. That’s all my research.” We can measure good or bad, and so on and so forth, so we started doing this in 2013.
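(For readers unfamiliar with the kind of measurement being described: a minimal sketch of polling the standard WebRTC getStats() API to estimate call quality. The polling interval and loss threshold below are illustrative, not CallStats.io’s actual logic.)

```typescript
// Minimal sketch: periodically poll WebRTC statistics to judge call quality.
// The 10-second interval and the 5% loss threshold are illustrative only.
async function sampleQuality(pc: RTCPeerConnection): Promise<void> {
  const report = await pc.getStats();
  report.forEach((stats) => {
    if (stats.type === "inbound-rtp" && stats.kind === "video") {
      const lost = stats.packetsLost ?? 0;
      const received = stats.packetsReceived ?? 0;
      const lossRatio = received > 0 ? lost / (lost + received) : 0;
      if (lossRatio > 0.05) {
        console.warn(`High inbound video packet loss: ${(lossRatio * 100).toFixed(1)}%`);
      }
    }
  });
}

// Poll every 10 seconds for the lifetime of the call.
function monitorCall(pc: RTCPeerConnection): number {
  return window.setInterval(() => void sampleQuality(pc), 10_000);
}
```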

In 2012-2013, we were doing that. In the summer of 2013, people heard that we were doing this app, and some other people in the region, in the Nordics, reached out asking, “Hey, I heard you’re building this thing. We’re facing these issues with quality. Are you experiencing them as well?” I said, “Yeah, we are indeed and we are measuring it,” and things like that. We showed them a demo of what we were doing. They were like, “Can we have that? Could you open source it?”

Dan:
Yeah. That’s the piece we want, right?

Varun Singh:
Yes. That was the point when I said, “Look, two people are asking me for this, and these guys have no idea about WebRTC or any of it.” They were just using a platform and rolling with it. There was this one app which was doing healthcare; both were healthcare products. Web engineers building a healthcare product are very far from any background in RTC, real-time communications. We started talking to them. In the summer of 2014, I basically decided to drop the whole product that we were building and just move over to this new thing.

That also meant that I had to change the team vastly. I had a different team which was more Web-oriented, so I had to let them go and basically redo the company as a new company. I got new co-founders, Marcin and Shaohong. They were both working with me as researchers, as master’s students, one of whom I had guided through his master’s thesis on congestion control. It was easy to hire people who actually knew the subject matter; I would not have to train them or anything. They were basically on board immediately. They understood what we needed to do.

We basically took the product to the first WebRTC summit, or expo, or whatever it was called, in San Francisco later that year. We had just spun this off, so we had to separate our system from this CallStats.io thing. We still had the demo, which was the teaching app, which is now the demo app for CallStats. We basically went to San Francisco, or San Jose at that time, and showed it to a bunch of people there. We got really good, positive feedback. People asked how they could get an account, and so on and so forth.

Basically, in February 2014, we had separated the CallStats.io code base completely from the rest of the product and built a standalone data logging system, which is the first thing you need to get people on board, and off we went. Most of 2014 was taking the initial feedback and actually creating the product with various people, not customers per se, because there were a lot of people who wanted to try it. In that process we were trying to figure out if there was money in it. No one was willing to pay at that point, but those were very early days: I understand you don’t want to pay now, but what would you pay later? There was business testing, trying to figure out what would work, what our cost would be, and so on and so forth.
Basically, 2014 was a bootstrapped company, three people trying to figure things out, and one of the problems that we ran into was not WebRTC-oriented at all. The team was all WebRTC-oriented, but we now had a distributed systems problem, because we were building an analytics platform that had to be globally available: the first people that we talked to were in the U.S., Europe, and China. We knew that we needed to be global from day one. We could not build something that would just work out of a U.S. data center, and you had the EU-U.S. Safe Harbor rules, so we basically had to understand these things fairly quickly, because now we were a data company or an analytics company, not so much a WebRTC company.
That’s what got us going: the initial feedback, and people actually presenting a problem, formulated as a problem they were encountering. It was easy. We had the know-how and a solution that fit the problems people were facing, and we commercialized it.

Dan:
It’s interesting that you mentioned that this is more of an analytics platform now. Did you get questions about that at the time? Like how do you compare to other analytics companies, or whatever? Because I’m sure that as people tried to understand why they needed what you had to provide, that was one of the common questions.

Varun Singh:
Indeed. I think both of the companies that we were speaking to did not think of WebRTC, or of what we do, as analytics per se, but they had a lot of analytics already, right? They had websites powered by Google Analytics, Mixpanel, or similar, or they had their own BI tooling internally. So, they were already collecting data themselves to run their company. Subscribers, if you’re talking about ISPs: they have subscriber data, they have billing data, they have Google Analytics. They crunch these numbers. Some of these are big analytics companies, like Splunk, or small ones like Keen. There are so many of them; if you go and look for one, you will find one. You pick one for your infrastructure monitoring.

When we were talking to big companies, they were always like, “We are already paying tens of thousands of dollars per month (or per year) to do all these things.” We always got the question, “Cool. You can do WebRTC monitoring. You have quality and all the statistics and you have your own dashboard. But what if you could export that data out into these other dashboards that we have?” There was actually a lot of initial feedback and there were questions around that.

However, when you’re building a company, you pick the battles you want to win. In our case it was being able to build a compelling analytics platform where, to date, we still get questions about whether you can export the data. It’s on our roadmap, but the compelling part of the product is the fact that we’re able to measure things, and as long as we’re able to measure things accurately, that’s our emphasis.

WebRTC is a moving standard. Things change week to week or month to month. We’ve been hesitant to actually export the data out, because then you have the problem that you’ve exported bad or inaccurate data, because systems will change. Then people have this legacy data and do not know how to clean it up. In our case, we control the data and we have tons of it over the last four years, so we can go back and correct it. We can rename fields if the JSON keys have changed, and adapt values if things were divided by 1,000 or multiplied by 1,000.
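(As an illustration of the kind of back-correction being described: a small sketch that renames a changed JSON key and rescales a value whose unit changed. The field names and the millisecond-to-second conversion are hypothetical examples, not the actual CallStats.io schema.)

```typescript
// Illustrative sketch of back-correcting stored metrics when the reporting
// format changes. The field names ("googRtt" vs. "roundTripTime") and the
// millisecond-to-second conversion are hypothetical, not the CallStats.io schema.
type MetricRecord = Record<string, unknown>;

function normalize(record: MetricRecord): MetricRecord {
  const out: MetricRecord = { ...record };

  // Rename a key whose JSON name changed between versions.
  if ("googRtt" in out && !("roundTripTime" in out)) {
    out.roundTripTime = out.googRtt;
    delete out.googRtt;
  }

  // Adapt a value whose unit changed (e.g. stored in milliseconds, now seconds).
  if (typeof out.roundTripTime === "number" && out.roundTripTime > 10) {
    out.roundTripTime = out.roundTripTime / 1000;
  }

  return out;
}
```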

Dan:
When the semantics change in some way then?

Varun Singh:
Right. We still control the data. That was one of the stumbling blocks that we faced initially, but as the product grew in importance and in features, it’s still something that’s asked; it’s just not one of those stumbling blocks anymore. Companies love the product. They like the fact that we’re able to measure quality as a function of both time and people, and as a service: within a call, for the people within a call, as an aggregate of calls over the day, and so on and so forth, the overall performance.

There are always things that we can do better, and that’s something that we’re still working on. WebRTC is nascent; it’s like 1,000 or 2,000 companies using it. As the industry grows to maybe 10,000 or 20,000 companies, that’s when I feel most of the code base and the standards can stabilize. The standards have stabilized, but I think the implementations have not. When implementations stabilize and converge, I think you’ll see more of these products take off, and on our side, I think we’ll have more compelling features, because today we spend a lot of time trying to figure out what broke yesterday.

Dan:
That’s a value too, of course, right? Because to try to do that yourself, I’m sure, is quite challenging to keep up with.

Varun Singh:
Right. Things break and we detect it automatically. That’s one aspect of it. Of course, it would always be cool to know whose fault it is, like is it the code base, because you see negotiation failures and it’s sometimes not clear if it’s the code base or it’s Chrome changing something.

It’s something that developers need to go and look at by themselves. Sometimes those things happen simultaneously: you roll out something and Chrome updates itself or Firefox updates itself. Then you have the problem that you have these three things that could have changed and you don’t know where exactly the problem is. There’s a fair amount of hunting. Still, of course, the product highlights it and shows you how big that problem is. Is it one user, or is it 10% of your user base suffering, in which case it becomes pertinent for you to solve the problem?

Dan:
As you say, you’ve got plenty more to do, plenty more exciting things coming up. But, I’m curious, are you happy with how things have turned out up to this point? What I mean is, you started with the goal really of measuring and you’ve added more on to that. Do you think you’re doing a good job measuring now? I’m just curious. In that need, the need that prompted someone initially to say, “Hey, I want that tool,” how well do you feel that’s gone, the process of meeting that particular need?

Varun Singh:
I think the demand is still there, in the sense that people still come up to us and say, “Hey, we are failing on TURN. We don’t have sufficient tooling,” or “We do have tooling, but the tooling is breaking. So, we’d love to give it a try.” Then their first experience of CallStats is always amazing. Especially when you don’t have much, it can be really cool, because you suddenly get so much visibility that you didn’t have in the past, or you had it at a smaller scale. Many had only tens of calls, and then as the system grew the monitoring didn’t keep up, because it’s not the main thing, so it basically lags behind. Then you don’t have that much visibility.

Yeah, I think we do feel the excitement of the end user even today when they come on board. Of course, the needs are changing all the time. Initially, the product was developed by developers such as us, for developers, to figure out what the problems were. Subsequently, the roles that use CallStats.io today have increased. There are product managers who use it to compare a particular version in the past with whatever they have today: how are they performing against N minus one, is the performance better? Performance is anything they think they were measuring, so it’s churn, monthly active users, minutes, whatever.

Since we’re tracking all these metrics, they get to compare and contrast. There can be versions of the app, and there can be versions by geography, like, “Is my app in South America doing the same as it is doing in Asia?” You can track that specifically. For example, there’s a company that launched in, let’s say, Asia first, and then they moved to South America and launched the product there. They want to track the same metrics but shift the period of time, because they’re now launching and they want to see, for the new region, whether it’s the same as it was in Asia, or faster, or slower, and how they should change their marketing message because of that. A lot of that is changing.
One of the things that we found in the interim was customer support. For example, say we’re on Hangouts or a similar product: I’m the patient, you’re the doctor, and I’m paying for the interaction. If the call is bad, then as the patient I would want to complain, because I would not want to pay, but as the doctor you would still want to be paid, right? You spent your time. Even if it’s a no-show or a bad experience, you still put in the time, so it ends up with the service operator to figure out or reconcile this. In those cases where real money is involved, if they’re going to refund the end user, like the patient, and still pay the doctor, they need to know how they can mitigate this fairly quickly. They don’t want to run into the same problem again and again, or they would put up a disclaimer, or close the call early if they know within the first 30 seconds and can say, “This call is going to be bad, so if you both continue, it’s up to you, it’s free,” or whatever. Those are the things that they want to do.

Dan:
The point is it’s live, right? It’s at runtime that they want to be able to do this.

Varun Singh:
At runtime, yes. If the doctor and the patient know that the call will be cut because of technical issues, for example, then the payout is also smaller, because you could put the doctor onto a different call. He doesn’t care if he talks to this person or another. He just wants to make sure that the patient he’s talking to gets a diagnosis, then he gets the next one, and sends them to the hospital if need be.

We built out something called Automatic Insights, which basically converts our knowledge into plain text, into English. We identify parts of the call which are bad and then we go back and say, “Hey, this call was over TCP, so you’ll see all this erratic behavior because you’re behind a firewall. That’s why …” yada, yada, yada. If it goes to a customer support person who is not from engineering but can understand the jargon, and the customer is complaining, they can put the two together and, without talking to engineering, respond to the end users.
It basically reduces a lot of the time spent on support, which improves your metrics from a customer support, customer success point of view, but it also frees up your engineering from being disturbed. It does not distract them from whatever they’re going to do that day. In at least 70-80% of the cases, this is good enough, because the end user will say, “The voice kept dropping,” and then you say the Wi-Fi that you were on, or the other person’s Wi-Fi, was the reason we were seeing packet loss. That’s good enough. They will ask, “Will it happen next time I’m calling them?” and you’re like, “Yeah, maybe.”
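(A sketch of the rule-to-plain-text idea behind Automatic Insights, as described above. The conditions and wording below are illustrative, not CallStats.io’s actual rules.)

```typescript
// Sketch of the rule-to-plain-text idea behind Automatic Insights.
// The conditions and wording below are illustrative, not the actual rules.
interface CallFacts {
  transport: "udp" | "tcp";
  avgPacketLossPct: number;
  network: "wifi" | "wired" | "cellular";
}

function explain(facts: CallFacts): string[] {
  const insights: string[] = [];
  if (facts.transport === "tcp") {
    insights.push(
      "Media was sent over TCP, which usually means a restrictive firewall; " +
        "expect erratic delay and throughput."
    );
  }
  if (facts.avgPacketLossPct > 3 && facts.network === "wifi") {
    insights.push(
      "Packet loss was high while the user was on Wi-Fi; the wireless link is " +
        "the most likely cause of the audio dropping."
    );
  }
  return insights;
}

// A support agent would see, for example:
// explain({ transport: "tcp", avgPacketLossPct: 5, network: "wifi" })
```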

Dan:
Actually, in an earlier part of this video, it kind of fuzzed out for a bit and it’s cleaned up again. I was wondering, “I wonder where the problem is? Is it here, there, whatever?” This is an app where I don’t have that information, so I definitely can see how nice that would be to have.

Varun Singh:
They could easily have markers on the video which would say, “Hey, it’s the upstream or the downstream or wherever the problem occurred.”

Dan:
Right. You’ve done a lot with your product. You’ve talked about a lot of the changes that it’s gone through, moving from measuring to more than that, including your knowledge and wisdom and so on, and making it easier for customers. Also, you’ve provided a lot of good public presentations, reports, and articles to the community on what you’re finding for WebRTC usage. What have you learned about the challenge of building tools for WebRTC apps, though? For you, in the process of actually trying to make a support tool like this, what things surprised you?

Varun Singh:
When we started, we had an analytics tool and were trying to help customers figure out what the problems were, but we’re not an infrastructure company per se, so a lot of the problems can be part of their infrastructure and how they’ve rolled out their service. One of the things that I remember anecdotally with this product, and I’ve spoken about it at earlier events, but it might be pertinent here again, is this company that launched their product to the K-12 age group, so teenagers in high school, for example. They did it around April-May last year.

The product rolled out, everything was seamless, and so on and so forth. People were using it; traction was there. Then in, I think, August, they ran into this problem that calls were failing, things were breaking and alarms were going off. They have alerts that tell them that calls are failing, there’s churn, and so on and so forth. Suddenly, nothing was working. They reached out to us and said, “Hey, you’ve not changed anything in our app, right? You did not roll out anything? What’s happening today? Is it a problem at your end?” I said, “No, I don’t see a problem, because none of our other customers are seeing this behavior; it’s specific to your service.” So, we sat down, we looked at the data and at the failures, and they were certainly not related to anything on our side. We were like, “Hey, what you’re seeing are real failures.” Their initial thinking was that Chrome or Firefox had basically changed something, which was giving them false positives, but the users had started complaining as well that they were not able to do things.
What we quickly realized was that the behavior had changed partly because the kids over the summer were at home, and their home Wi-Fi, or summer camp networks, or wherever they were, were not really intrusive. When they went back to school, the school’s Wi-Fi or LAN was not really permissive: schools have really good firewalls, and some of the problems they were facing were partly because the usage scenarios had suddenly changed. The behavior changed. While we were talking to them, things even got better, because people went out of school in the afternoon, so things were again back to normal. While we were debugging their servers, we realized that this company needed to roll out more infrastructure, so to speak. Not only more infrastructure, but new types of TURN servers with new configurations and things like that.

Basically, the learning from that was to build out a better checklist on our side to investigate some of these problems automatically, in service: if your calls are failing as a function of time and you’re seeing so many TURN- or firewall-related issues while your TURN servers are still underused, then maybe there’s something we need to look at. Some part of that product was built out from that.

I think what’s still missing is a best-current-practices kind of thing, where companies that roll out a product would actually go through a checklist. There are so many Stack Overflow stories as well, where someone says, “I’m not able to connect,” and someone says, “Put up a TURN server,” and they say, “I put up a TURN server. It’s already there.” Then someone asks, “Did you configure it for TCP?” and they say, “No,” and on it goes. That’s one of those areas where people are struggling. I know a lot of people who are in the enterprise-related markets. They have built products where they use CallStats as part of their site discovery. They have probes or things that sit inside the network and do routine calls from the inside to the outside before they start deploying their product or their ideas.
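(As an example of the checklist item being discussed: a client configuration that offers TURN over UDP, TCP, and TLS on port 443, so calls can still relay through restrictive firewalls. Hostnames and credentials are placeholders.)

```typescript
// Example of the checklist item above: offer the client a TURN server that can
// also relay over TCP and TLS on port 443, so calls still connect behind
// restrictive firewalls. Hostnames and credentials are placeholders.
const config: RTCConfiguration = {
  iceServers: [
    { urls: "stun:stun.example.com:3478" },
    {
      urls: [
        "turn:turn.example.com:3478?transport=udp",
        "turn:turn.example.com:443?transport=tcp",
        "turns:turn.example.com:443?transport=tcp", // TURN over TLS
      ],
      username: "user",
      credential: "secret",
    },
  ],
};

const pc = new RTCPeerConnection(config);
```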

Dan:
Okay. It’s just a continuous quality check, essentially?

Varun Singh:
Right. This is before they deploy, because it’s like a site reconnaissance thing. They want to make sure that they figure out all the scenarios. They put this probe in for a week or something so they can see the pattern. They put it in different parts of the network, so they can put in five or ten probes, and basically go to the sysadmin or a network ops admin and say, “How many subnets do you have? Can you put one in each of them? Let’s see what happens.” Once they know that, then they roll out their product into that enterprise and they’re much more aware of what should happen. The probes go through VPNs and things, so those boxes actually behave exactly like end-user devices.

That’s something that some companies do. Not all companies do it, of course, because they don’t know if they’ll be used in an enterprise or not. I think there’s tooling around that that needs a bit of improvement, and there are a bunch of companies trying to solve that aspect of it. Then there’s a lot, I think, that comes from the fact that Chrome and Firefox and all the browsers change every six weeks.

Dan:
Yeah. That’s kind of what I’m asking about: what was it that was tough for you? What would have made your job easier, essentially? Obviously, if the browsers weren’t changing things regularly …

Varun Singh:
Right. I think products need to evolve, and even once the spec is stable and the browsers are compliant, they will need to keep improving. That’s how they distinguish themselves, and that’s also how we improve, adding some things, seeing how the performance goes. One of the things that we’ve tried to build internally, and we did well with Chrome and Firefox, is being able to continuously test the browsers automatically; but now new browsers are coming, hopefully Safari, and there’s already Edge.

That’s one aspect we put some effort into, but it’s not our core thing, so we’re lagging a bit behind on it. We were doing a very good job, I think, until last summer, keeping it up. It’s something that we have not put that much effort into over the last few months, but it’s still something that we are looking to do. For us, it would be cool if someone else in the ecosystem would take responsibility and run those things, and we could just look it up.

Dan:
Right. Because that’s what you care about. You just want to know whether something broke there and how.

Varun Singh:
Then, if we had that information, that could easily be an offering that we could provide, if that were a global need. This is a thing that we face and I think a lot of people face; I don’t know how big that problem is. That’s, I think, usually the case with everything when you run a service. You want to know if it’s scalable, if it’s reliable, if the security or the privacy is not broken in any form. The last few bits, I think, are generally about ops and alerting. Some of those aspects we take into account at CallStats as a partner. We detect, we diagnose, and in some small cases we help deploy fixes: the dashboard surfaces them, and the alerting is then used for things like autoscaling. As they start to see more failures, they start to scale up. There are things that we do like that. Then there are of course security aspects. We don’t cater to those so much, but there are tools that people might be curious about.

Dan:
All right. What’s next for you? Is there anything you’re excited about or is there something else big that you think needs addressing? I’m thinking something that keeps you up at night, either because you’re excited or you’re worried that, “This has got to happen”?

Varun Singh:
Yeah. One of the things that keeps me up slightly is the hope that the browsers converge on something soon, sooner rather than later, although each company has its own roadmap. I believe that they’re making steady progress; sometimes it’s just not in the same direction. Some browsers do one thing and other browsers do another, and because they don’t have a good overlap, we’re actually still doing the same thing that we were doing before. So I’m actually very excited about the object model coming in, and I think that’s really a good step forward both for us and for the community at large. Even though most people are going to use wrappers around it, the way the wrappers would work would be more deterministic and would surface more. Just because they’re objects, it gives us more surface to probe and measure things more accurately. When you have a blob, you’re trying to assess the internal state of that blob by probing from different points and then guessing.

Some of that guesswork is going away because of the object model. Even though we’re kind of done with the spec now with 1.0, whatever comes next, I hope they take the object model further and open up more of the surface for control. There are a lot of getters and a lot of setting up the pipeline, but after you set up the pipeline, there’s not much that you can do in terms of transforming or mutating it.
You can always, of course, break it and attach it somewhere else, but those objects are not malleable enough, I think. If we had a lot more control over them, that would change things, more from a geek point of view, for people who are really into WebRTC. At a high level, I think it would not change anything for companies that opt for WebRTC, from their perspective.
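(A small example of the kind of surface the object model exposes: per-sender objects whose parameters can be read and, to a limited extent, modified. Capping the video send bitrate is one of the few mutations the current API allows.)

```typescript
// Example of the per-object surface the object model exposes: each RTCRtpSender
// has parameters you can read and, to a limited extent, modify. Capping the
// video send bitrate is one of the few mutations the current API allows.
async function capVideoBitrate(pc: RTCPeerConnection, maxKbps: number): Promise<void> {
  for (const sender of pc.getSenders()) {
    if (sender.track?.kind !== "video") continue;
    const params = sender.getParameters();
    if (!params.encodings || params.encodings.length === 0) {
      params.encodings = [{}];
    }
    params.encodings[0].maxBitrate = maxKbps * 1000; // bits per second
    await sender.setParameters(params);
  }
}
```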

I think it’s already ready for most use cases. It’s just us wanting a bit more control in different scenarios, looking for more opportunities. The biggest thing that keeps me excited these days is the fact that multimedia is on the rise, more companies are using it or developing products around it, and we still learn about new companies on a day-to-day basis. The problems that they’re facing are changing as well.

We’re hoping that some of the enhancements that we make later this year, end of summer or early fall, are going to address some of those situations. I’d love to talk to you about them, but unfortunately I can only leave you with this ambiguous statement that we will have something towards the end of the year. I’m super excited; if we deliver on that, I think it’s going to change a lot of how people perceive the company, and how they perceive WebRTC as well.

Dan:
Wow. That sounds exciting. I look forward to hearing about that when I’m talking with you again.

Varun Singh:
I’m super stoked about it.

Dan:
Cool. Is there anything else you’d like to tell us about today, maybe not that, but …

Varun Singh:
Some of the things that have come about in the last three to six months at CallStats.io are features that are more enterprise-related. We’ve gone the route where we spend a little bit more time on access control in the system. If you are an enterprise customer, you have lots of people who want access to the system, and you have privacy concerns. We’ve done a lot more on user management, which again is not WebRTC- but analytics-related, and we’ve become more mature because of that. We have tools which help customer support people: for example, you can have personal information exposed to them, but not to the developers, who do not necessarily need personal information to diagnose a call. We’ve built in a lot more tooling, and I’m super excited to see more enterprise customers, our bigger customers, use those aspects of the product.

Dan:
Right, a more serious, more mature product, as you say.

Varun Singh:
Right.

Dan:
Okay. Excellent. Thank you so much for your time.

Varun Singh:
Thank you, Dan.

Dan:
All right. Look forward to talking with you again.

Varun Singh:
Cheers.

Dr. Daniel Burnett has almost two decades of experience with Web and Internet standards dealing with communications, having co-authored VoiceXML, MRCPv2, WebRTC, and many other standards. In creating AllThingsRTC, Dan aims to provide the innovators in the real-time communications space a forum for explaining the topics that really matter.