
“DNSSEC Will Become a Standard Part of Any Offering Over Time,” Dr. Burt Kaliski, Verisign

Change has been a constant in almost every facet of life, and the technology industry is no exception.

Technology never stands still; it keeps developing, improving, and re-inventing itself, changing along the way how it diffuses across society.

It’s always a pleasant sight to see people and organizations that support, facilitate, and ensure the adoption of these changes. If not for them, we’d still be stuck in the world of dial-up Internet, huge computers, and rotary dial telephones. Heck, we might never have reached that world in the first place.

And Verisign is one such organization. Through its efforts to ensure the operational deployment of DANE, DNSSEC, IPv6, and many more protocols and products that seek to replace the traditional systems in place today, Verisign strives to build a better and stronger Internet.

We recently had an opportunity to interact with Dr. Burt Kaliski Jr., Senior Vice President and Chief Technology Officer, Verisign, at WHD.India 2013, where he talked at great length about some of these Verisign initiatives. Some highlights of our session with him are below, and a print version of the whole interaction follows.

IPv6 is a complete breakthrough, because it has 4 times as many bits, and that’s an enormous exponential increase in the number of possible addresses. There is no foreseeable period in which IPv6 addresses would run out. In fact, IPv6 makes it possible to give out unique addresses for everything at every point in time.
– Dr. Burt Kaliski Jr, Senior Vice President and Chief Technology Officer, Verisign.

Dr. Burt Kaliski Jr, SVP and CTO, Verisign

Q: Before we begin, please tell our readers about your journey from RSA Laboratories to Verisign.

A: RSA Laboratories was the place where I started my career in security after getting my PhD. While I was there, back in the startup days, Verisign spun out of RSA to offer certification services.

I stayed with RSA well into my career and eventually moved to EMC Corporation after it acquired RSA. Then, two years ago, I took an opportunity to move back to Verisign, which I had been following all along. In a way, it was like returning to where I started.

Q: What, according to you, are some of the major flaws in the modus operandi of the X.509 CA model currently in place that seriously jeopardize Internet users’ security?

A: The X.509 certificate authority model has been around since the 1980s, and it’s the basis for electronic commerce sites; we have been using it for a number of years. It’s a good model in many respects, but, as in a number of systems, there can be too much of a good thing. In the case of the X.509 certificate authority model, there are too many certificate authorities, all of which, in many settings, are treated the same. That means that a compromise of any one of the certificate authorities could lead to an attack on the system. What we’ve looked at as a security industry are ways to mitigate that compromise, so that you can get all the benefits of the X.509 CA model, but with some checks and balances in place that can prevent attacks from occurring.

Q: What is DNS-based Authentication of Named Entities, and how does the DANE protocol deploy DNSSEC to thwart the man-in-the-middle (MitM) attacks that are rife in the CA model?

A: Let’s start with DNSSEC. The security extensions for DNS were developed to provide additional assurance above and beyond the relationship that the parties might have when they are exchanging the DNS information, and that additional assurance comes in the form of a digital signature. This means that the DNS, in addition to returning the IP address associated with a given domain name, will also return a digital signature on that information, so that a relying party can confirm that the correct information was presented, even if that relying party wasn’t directly interacting with DNS.
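
To make the idea concrete, here is a minimal sketch of requesting a DNSSEC-signed answer, assuming the third-party Python library dnspython; the domain name and resolver address are illustrative choices of ours, not Dr. Kaliski’s:

```python
# A minimal sketch of fetching a DNSSEC-signed answer, assuming the
# third-party dnspython library; names and the resolver are illustrative.
import dns.message
import dns.query
import dns.rdatatype

# want_dnssec=True sets the DO bit, asking the nameserver to include
# RRSIG (signature) records alongside the ordinary answer data.
query = dns.message.make_query("verisign.com", dns.rdatatype.A, want_dnssec=True)

# TCP avoids truncation, since signed responses can be large.
response = dns.query.tcp(query, "8.8.8.8", timeout=5)

# For a signed zone, the answer section carries both the A rrset
# and the RRSIG rrset that signs it.
for rrset in response.answer:
    print(rrset)
```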

DANE, the DNS-based Authentication of Named Entities protocol, takes this a step further and says: if we can get this additional assurance for IP addresses, why not get additional assurance for other information associated with a domain name? In particular, you can have this assurance provided as a check and balance for information that otherwise is prepared by certificate authorities.

So as I mentioned, there can be potential attacks because there are too many certificate authorities. A countermeasure to those attacks is for the owner of a domain name to say exactly which certificate authority – the one CA it intends to work with – so that if there were compromises of any of the others, those would not be able to undermine the security of the domain name owner.
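
A hedged sketch of how a client could act on such pinning, assuming dnspython and the Python standard library; the host name and the “3 0 1” (end-entity) parameters are illustrative, and usage values 0 or 2 are the ones that pin a particular CA:

```python
# A sketch of a DANE-style check: compare a published TLSA record with the
# certificate a server actually presents. Assumes dnspython; example.com
# stands in for a host that publishes TLSA records.
import hashlib
import ssl

import dns.resolver

host = "example.com"

# TLSA records live at _<port>._<protocol>.<name>.
tlsa_records = dns.resolver.resolve(f"_443._tcp.{host}", "TLSA")

# Fetch the server's certificate and hash its DER encoding.
pem = ssl.get_server_certificate((host, 443))
der = ssl.PEM_cert_to_DER_cert(pem)
cert_sha256 = hashlib.sha256(der).digest()

for record in tlsa_records:
    # usage=3, selector=0, mtype=1: SHA-256 of the full end-entity
    # certificate; usage 0 or 2 would pin a CA certificate instead.
    if (record.usage, record.selector, record.mtype) == (3, 0, 1):
        print("match" if record.cert == cert_sha256 else "mismatch")
```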

Q: Since DANE needs DNS records to be signed with DNSSEC, isn’t DNSSEC validation a major issue that heavily limits DANE’s use?

A: Applications and services often evolve in tandem. DNSSEC capabilities are built into nameservers starting at the root, moving through top level domains like .com and .net operated by Verisign, and then moving into the next levels. Some records are already signed, so they can be validated if a relying party requests it. But you don’t need to validate everything or sign everything in order to add security for a particular set of records. If there is some application that needs the extra assurances provided by DANE (establishing a secure connection with a web server for banking transactions, say, or enabling secure email), that application can stand by itself. So you don’t need everyone to adopt DNSSEC in order to have greater security and the use of DANE within your own application.
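
In practice, an application that leans on a DNSSEC-validating recursive resolver simply checks the AD (Authenticated Data) flag on the response. A small sketch, again assuming dnspython, with an illustrative resolver address:

```python
# A sketch of outsourced validation: let a DNSSEC-validating recursive
# resolver do the work and check the AD flag it sets (assumes dnspython).
import dns.flags
import dns.message
import dns.query
import dns.rdatatype

query = dns.message.make_query("verisign.com", dns.rdatatype.A, want_dnssec=True)
response = dns.query.tcp(query, "8.8.8.8", timeout=5)

# AD (Authenticated Data) means the resolver validated the DNSSEC chain,
# so the application does not have to repeat the cryptographic checks.
if response.flags & dns.flags.AD:
    print("answer validated by the resolver")
else:
    print("answer was not validated")
```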

Q: How do you see the future of DNSSEC in the Internet security space?

A: I think we will continue to rely on DNSSEC as a building block. It will become a standard part of any offering. As the new generations of nameservers, recursive nameservers, applications, relying parties and so on are developed, they’ll build a better foundation because the technique is available. So DNSSEC will gradually become commonplace.

There will be certain applications that drive its demand faster than others, and I think those are the ones that will get additional value from what it will effectively become – a global distributed directory of signed information.

Q: How can Web Hosting providers, ISPs, Hardware vendors and Software developers each play their part in supporting DNSSEC?

A: If you are a hosting provider, you want to differentiate your services by offering DNSSEC for some or perhaps all of your customers. That means as a hosting provider, you want a nameserver that has DNSSEC capabilities or you outsource to someone else that has those capabilities for you.
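
As a rough illustration of the bookkeeping involved, here is how one might derive the DS record that a signed zone hands up to its parent (.com in this example) to anchor the chain of trust, assuming dnspython; the zone name is illustrative:

```python
# A sketch of chain-of-trust bookkeeping: derive the DS digest that the
# parent zone publishes for a signed child zone (assumes dnspython).
import dns.dnssec
import dns.name
import dns.resolver

zone = dns.name.from_text("verisign.com")

# Fetch the zone's public DNSKEYs and compute DS records for the
# key-signing keys (the ones with the SEP bit set in their flags).
for key in dns.resolver.resolve(zone, "DNSKEY"):
    if key.flags & 0x0001:  # SEP bit marks a key-signing key
        print(dns.dnssec.make_ds(zone, key, "SHA256"))
```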

If you are an application developer preparing a browser, an operating system or a mobile client, then you want validation of the DNS information that comes back – either doing it locally or relying on a recursive nameserver that does it for you and confirms that the validation is complete – to be an option in your implementation.

So each party has the option of putting these services in place. But the real key is to put them in place where they make a difference. If there is a particular application that benefits from this distributed global directory of signed information, that’s the place to put most of the emphasis at first, and then you can pull the other parts along.

Q: Moving on, the recently published technical report by Verisign, titled “New gTLD Security and Stability Considerations,” warns that the addition of hundreds of new gTLDs over the next year could destabilize global operation of the DNS, along with other significant consequences. Can you highlight the main areas of focus in the report and some potential problems/issues that you think need to be resolved in a timely manner?

A: Earlier in 2013, Verisign published a research report outlining some of the concerns that we have about security, stability and reliability as new generic top level domains are introduced.

Now, we have observed the operation and the gradual pace of growth of the generic top level domains and the country code top level domains, but the addition of so many new gTLDs is unprecedented. It’s a huge multiplier of the use of the root servers, with different kinds of usage patterns that may not have been anticipated previously.

We do commend ICANN for its commitment to ensuring the security, stability and reliability of the root servers and the Internet in general as the new gTLDs are introduced, and that is why we have raised these concerns.

Some of the high points of these concerns: One is that the rapid pace of change for the root servers – effectively adding an order of magnitude to the number of objects, and perhaps even more to the amount of traffic – needs to be measured carefully. There is no one root server; there are in fact 13 different root servers by design, with multiple independent operators. So to have a full picture of the impact, it’s important to have the right measurements in place. The reason these measurements are important is that the root servers are not always used in the way you might expect. In fact, we have seen that 10% of the traffic to the root servers is for generic top level domains that don’t actually exist. These requests come from throughout the Internet to resolve things like .corp or .local, which are built into applications but are not generic top level domains.

So it’s important to understand the impact of this set of requests, which represents applications throughout the Internet that assume these gTLDs are reserved for their own local use.
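
The mechanism is easy to observe: a name under a non-delegated TLD fails at the root today, but could suddenly start resolving once that TLD is delegated. A small illustrative sketch, assuming dnspython and a hypothetical internal-style name:

```python
# A sketch of the name-collision concern: queries for internal-style names
# under non-delegated TLDs leak to the root and currently get NXDOMAIN.
# The name fileserver.corp is hypothetical (assumes dnspython).
import dns.resolver

try:
    dns.resolver.resolve("fileserver.corp", "A")
    print("resolved: the name now exists in the public DNS")
except dns.resolver.NXDOMAIN:
    print("NXDOMAIN: .corp is not delegated, so the query fails at the root")
```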

And that’s where the stability, security and reliability issues come in – if these applications are assuming that some generic top level domains have not been delegated, what happens when they are? How would we measure and see the impact? Could that compromise security? Could that cause systems to fail? That’s the area we’d like to see more study on.

Q: Do you personally think that new gTLDs will have as significant an impact on the domain industry as is touted? Because past new-gTLD launches like .biz, .info, .travel, .mobi, etc. failed to marginalize .COM’s dominant position.

A: The gTLD program, which Verisign participates in in a number of ways, is another way to give more choice to the users and the owners of the resources who are looking for better ways to connect to each other – different ways of describing their presence on the Internet, different languages, different character sets, etc. – because these are all things that will make the Internet easier to use and more accessible.

The objectives of the new gTLDs are very significant. I can’t predict what will happen as these gTLDs progress, or comment on any specific gTLD in particular, because in any area of innovation, the industry learns over a period of time. But we do expect that the established domain names, .net and .com in particular, will continue to be relied on for a long time to come.

Q: This one is regarding another of Verisign’s initiatives. How serious is the IPv4 address shortage problem? Also, can you tell us how IPv6 resolves the problems associated with IPv4?

A: IPv4 is a great example of unexpected success. When the Internet first started, everything was so small that it was thought that 32 bits’ worth of addresses would certainly be enough for the stage of the experiment they were working on at the time. And it was enough to take us until just recently, when the last block of IPv4 addresses was allocated.

Now, over the years, the internet community has found ways of using that same set of IPv4 addresses as effectively as possible with all  kinds of sharing, reuse, mappings, translations  etc. And that can continue,  depending on what application you are trying to build, maybe for  a few years or  maybe even longer. But eventually, it  becomes too difficult to keep putting all this patchwork in place on a set of addresses that has run out. You can imagine the same happening in other domains as well. If you run out of mobile phone numbers,  you need to put in new area codes.

So, IPv6 is a complete breakthrough, because it has 4 times as many bits, and that’s an enormous exponential increase in the number of possible addresses. There is no foreseeable period in which IPv6 addresses would run out. In fact, IPv6 makes it possible to give out unique addresses for everything at every point in time. And the protocols and the parallel stacks of implementations are already being rolled out. Last year, there was an IPv6 day, where everyone who was participating enabled IPv6 so that you could reach their websites using the IPv6 protocol.
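
The arithmetic behind that remark is worth spelling out: 4 times the bits means not 4 times the addresses, but 2^96 times as many. A quick check using only the Python standard library:

```python
# The address-space arithmetic: 4 times the bits, 2**96 times the addresses.
import ipaddress

ipv4 = ipaddress.ip_network("0.0.0.0/0").num_addresses  # 2**32
ipv6 = ipaddress.ip_network("::/0").num_addresses       # 2**128

print(f"IPv4: {ipv4:,} addresses")   # about 4.3 billion
print(f"IPv6: {ipv6:.3e} addresses") # about 3.4e+38
print(f"ratio: 2**{(ipv6 // ipv4).bit_length() - 1}")  # 2**96
```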

I think we will see coexistence for a period of time, because the existing IPv4 systems are already working. But new applications, especially in the mobile Internet, will drive the use of IPv6 and then pull all the rest along.

Q: To wrap up, what developments can we expect from Verisign Labs in Q3 & Q4 of 2013?

A: Well, at Verisign Labs, we are looking at the next generation of protocols and architectures for DNS and the way it’s used. We have been active in promoting DANE and DNSSEC for a period of time, and I think people can expect to see more of that.

We have also been looking closely at the security, stability and impact of the new gTLDs, and we will likely have more to say on that too. In fact, Danny McPherson, the company’s Chief Security Officer, has started a blog series on Verisigninc.com that outlines many of the points that are of concern from our perspective and others’ as well.

We are also in the process of incubating some interesting new ideas that could be quite transformative, so perhaps some of those could come out of the lab in Q3 and Q4 of this year as well.
