[Freeipa-devel] DNSSEC support design considerations: key material handling

Petr Spacek pspacek at redhat.com
Fri Jul 19 16:29:06 UTC 2013


On 17.7.2013 18:25, Simo Sorce wrote:
> On Tue, 2013-07-16 at 17:15 +0200, Petr Spacek wrote:
>> On 15.7.2013 21:07, Simo Sorce wrote:
[...]
>>>> KSK has to be rolled over manually because it requires changes in parent zone.
>>>> (It could be automated for sub-zones if their parent zone is also managed by
>>>> the same IPA server.)
>>>
>>> Is there any provision for using DNSSEC with private DNS deployments ?
>> Yes, it is. DNSSEC supports 'Islands of Security' [Terminology]: DNS resolvers
>> can be configured with 'trust anchors' explicitly. E.g. 'trust domain
>> example.com only if it is signed by /this/ key, use root key for rest of the
>> Internet' etc.
>>
>> [Terminology] http://tools.ietf.org/html/rfc4033#section-2
>
> This means clients would have to be configured to explicitly trust a
> specific key for a zone right ? How hard would it be for us to configure
> IPA clients this way assuming by then we have a DNSSEC aware resolver we
> can configure on them ?
The answer really depends on the 'DNSSEC-aware resolver we can configure'. Glibc 
doesn't support DNSSEC validation at all.

Fedora 17+ solves this problem via the Unbound daemon: Glibc is configured to 
use a local DNS server (127.0.0.1 in /etc/resolv.conf) and the local Unbound 
daemon does the actual resolution and validation. Details are here:
https://fedoraproject.org/wiki/Features/DNSSEC_on_workstations

Unbound can be configured with a custom trust anchor via a single line 
containing the public key in a configuration file; it is really simple.
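
A minimal sketch of such a configuration - the key data below is just a 
placeholder, a real deployment would paste the zone's actual DNSKEY (or DS) 
record:

   # /etc/unbound/unbound.conf
   server:
       # trust 'example.com' only if it is signed by this key; the rest of
       # the Internet is still validated via the normal root trust anchor
       trust-anchor: "example.com. IN DNSKEY 257 3 8 AwEAAc3...Z9w=="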

The problem could be with other clients: I don't know how Windows handles 
this. There are some options in Group Policy, but I have no idea what can and 
what cannot be configured.

>>> Or is this going to make sense only for IPA deployments that have valid
>>> delegation from the public DNS system ?
>>>
>>> Hmmm I guess that as long as the KSK in the 'parent' zone is imported
>>> properly a private deployment of corp.myzone.com using the KSK of
>>> myzone.com will work just fine even if corp.myzone.com is not actually
>>> delegated but is a private DNS tree ?
>>> Or is that incorrect ?
>>
>> AFAIK there *has to be* delegation via DS record [Delegation Signer, DS] from
>> the parent, but IMHO it could work if only the public key for internal zones
>> is published (without any delegation to internal name servers etc.). I didn't
>> try it, so 'here be dragons'.
>
> Are there test/zones keys that can be used to experiment ?

It is possible to generate your own keys, sign your own root zone etc. and have 
your own private DNS tree, including the root. The only caveat is that you have 
to put your 'root key' on each testing client, but it is relatively simple. I 
did it in the lab :-)
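
For reference, such a lab setup boils down to something like this (BIND 9 
tools; the algorithms, key sizes and file names are just an example):

   # generate a KSK and a ZSK for the fake root zone
   dnssec-keygen -a RSASHA256 -b 2048 -f KSK .
   dnssec-keygen -a RSASHA256 -b 1024 .

   # sign the root zone file, picking up the generated keys from the
   # current directory ('smart signing')
   dnssec-signzone -o . -S -K . db.root

The resulting root DNSKEY then has to be configured as a trust anchor on every 
client, as in the Unbound example above.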

After some experiments I found that publishing DS records will not work 
without proper delegation (including NS records and IP addresses). Delegation 
from public servers is not usable if the clients can't start at the public 
servers and walk down the tree to the internal servers.

The reasons are clear from the top-down verification process, as described in:
http://backreference.org/2010/11/17/dnssec-verification-with-dig/

Further interesting reading, particularly the section "Top-down versus 
bottom-up validation":
https://www.dnssec-tools.org/svn/dnssec-tools/tags/dnssec-tools-1.3/validator/doc/libval-implementation-notes

It could work if both internal and external zones are delegated from the 
public DNS (i.e. DS, NS and A/AAAA records are in place) but the internal name 
servers reply only to internal clients.

To conclude: purely internal deployments require explicit trust anchor 
configuration in all cases.

Moreover, explicit trust anchor configuration will also be required if the 
user/company is paranoid and doesn't want to publish delegation records for 
internal zones in the public DNS.

[...]

>>> No, the problem is that we need to define 'who' generates the keys.
>>> Remember FreeIPA is a multimaster system, we cannot have potentially
>>> conflicting cron jobs running on multiple servers.
>> Right. It sounds like the CRL generation problem. Should we do the same for
>> DNSSEC key regeneration? I.e. select one super-master and let it handle key
>> regeneration? Or should we find some more robust solution? I'm not against any
>> of these possibilities :-)
>
> Falling back to SPOF should be the last resort or a temporary step
> during development.
Sure. The Fedora 20 deadlines are really close. Are you okay with something 
like do-it-in-cron-on-a-single-machine for the first implementation?

> I would like to avoid SPOF architectures if at all possible.
> We could devise a way to automatically 'elect' a master, but have all
> other DNS servers also monitor that keys are regenerated and made
> available in the expected time frame and if not have one of the other
> DNS servers try to assume the leader role.
>
> I have some ideas here using priorities etc., but I need to let them brew
> in my mind a little bit more :)
I'm curious! :-)

>
> [..]
>
>>>> For these reasons I think that we can define new public key attribute in the
>>>> same way as private key attribute:
>>>> attributetypes: ( x.x.x.x.x NAME 'idnsSecPublicKey' SYNTAX
>>>> 1.3.6.1.4.1.1466.115.121.1.40 SINGLE-VALUE )
>>>>
>>>> The resulting object class could be:
>>>> objectClasses: ( x.x.x.x.x NAME 'idnsSecKeyPair' DESC 'DNSSEC key pair' SUP
>>>> top STRUCTURAL MUST ( cn $ idnsSecPrivateKey $ idnsSecPublicKey ) )
>>>
>>> Will bind read these attributes ?
>>> Or will we have to dump these values into files via bind-dyndb-ldap for
>>> bind9 to read them back ?
>> AFAIK it has to be in files: Private key in one file and public key in the
>> other file. I can't find any support for reading private keys from buffers.
>
> Ok so to summarize we basically are going to load the private key file
> in idnsSecPrivateKey and the public key file in idnsSecPublicKey as
> blobs and then have bind-dyndb-ldap fetch them and save them into files
> that bind can access.
> This means bind-dyndb-ldap will need to grow the ability to also clean up
> and synchronize the files over time. So there will need to be hooks to
> regularly check all needed files are in place and obsolete ones are
> deleted.
Syncrepl will notify bind-dyndb-ldap about each change via LDAP; that should 
be enough if we decide to do key management in a helper daemon.

> Maybe we can grow a companion python helper to do this, as it
> is a relatively simple task that is not performance critical and will
> be much easier to write in a scripting language than in C. But I am not
> opposed to an in-daemon solution either.
Again, the schedule is tight: could we do something simple & stupid for the 
first version? Of course, we would have to warn: 'This is the first DNSSEC 
implementation, and it is fragile!' :-)

> [..]
>
>>> ack, should we have an explicit attribute that tells us what type it
>>> is ?
>> Maybe, we will see. It is possible that we will need to store the key algorithm
>> and key ID explicitly for some reason.
>>
>> I'm still not sure that I understand all aspects of key management in BIND.
>
> Ok, so I guess we need some research here before committing 100% to
> a plan, but so far it looks like the general plan is clear enough.

I spent some time with the code and it seems that:
- Public and private keys have to be in separate files
- File names have to follow a special format
  - We are able to reconstruct the file name like this:
    1) Save the keys to a temporary directory
    2) Load & parse the keys into memory
    3) Get the key metadata and reconstruct the correct file names
    4) Save the keys again and remove the temporary files
  - This magic will be unnecessary if we decide to store the algorithm 
*number* (not a readable name) and the key ID explicitly in LDAP (see the 
file name example below)
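
For the record, the file names BIND expects have the form 
K<zone name>.+<algorithm number>+<key id>, so with the algorithm number and 
key ID stored in LDAP the names can be built directly, e.g. (the key tag 
31406 is made up):

   Kexample.com.+008+31406.key       (public key, i.e. the DNSKEY RR; 008 = RSASHA256)
   Kexample.com.+008+31406.private   (private key)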

>>> One weak reason to allow read by admins would be to allow them to
>>> migrate away, but I do not like to put these keys completely unprotected
>>> in LDAP. Given bind has a keytab I was thinking we may want to encrypt
>>> these keys with the DNS long term key, however this complicates the code
>>> slightly in 2 ways:
>>> 1. we can have multiple DNS Servers (ie multiple keys)
>>> 2. we need to allow for roll-over
>>
>> I see two more problems:
>> 3. it would make Kerberos required (now the plugin doesn't require Kerberos)
>
> It could be optional, if the option is not enabled no master-key
> encryption scheme is used.
>
>> 4. we use SASL but I think that your approach would require direct
>> manipulation of the keytab
>
> Yes we'd need to read the keytab to get the long term key, but we could
> defer this to a companion tool in python as I mentioned before and keep
> bind-dyndb-ldap ignorant of the keytab.
> Actually this is quite a compelling argument, as this means we could use
> gssproxy to not give access to the keytab at all to the bind process,
> thus adding extra privilege separation and protection. The python
> companion would instead need access, so communication between
> bind-dyndb-ldap and the companion daemon would need to happen across a
> trust boundary, i.e. a socket and/or systemd/dbus activation or similar.
Maybe. At this point the separate daemon sounds like a really good idea.

[...]

>>>> The rest of the configuration options are related to the key management
>>>> problem. We need to know:
>>>> - how many key pairs (e.g. 2 KSKs, 2 ZSKs)
>>>
>>> Shouldn't we allow an arbitrary number ? Does bind have strict limits ?
>> Yes, arbitrary number sounds fine. 2 + 2 was just an example.
>>
>>>> - when (e.g. generate new key pair 30 days before active key expires)
>>>
>>> probably needs to be tunable. new attribute ?
>>>
>>>> - of which key types (KSK or ZSK)
>>>> - with which algorithms
>>>> - with which key lengths
>>>> should be generated. Note that we need to store configuration about X KSKs and
>>>> Y ZSKs.
>>>
>>> seem all of these needs to be tunables and require their own
>>> attributes ?
>> I agree. The question is how to group the attributes to make it useful.
>
> ok
>
>> IMHO it should express something like this:
>> - I want to use 1 KSK with algorithm RSASHA1, key length 2048 bits, the key
>> should be used for 1 year.
>> - I want to have 1 other KSK (with same parameters) ready for roll over at any
>> time.
>> - Roll over period is 1 month. (The time required for incremental resigning
>> with the new key, i.e. the time period when old and new signatures will co-exist.)
>>
>> The result should be:
>>
>> In time 0 (zone creation), generate 2 KSKs:
>> The KSK 'A' would have these timestamps:
>> - created = published = active from = 0 (generate signatures immediately)
>> - inactive = 0 + 1 year - 1 month (stop generating signatures after 11 months)
>> - delete = 0 + 1 year (one month was transitional period for incremental
>> resigning with the new key, then delete the key 'A')
>>
>> The KSK 'B' would be generated at the same time as 'A':
>> - created = published = 0 (publish key, but don't generate signatures)
>> - active from = 0 + 1 year
>> - inactive = 0 + 2 years - 1 month
>> - delete = 0 + 2 years
>>
>> During the first year, all records will be signed with KSK 'A'. In time '0 + 1
>> year - 1 month' KSK 'A' will become inactive and KSK 'B' will become active.
>> At the same time, new KSK 'C' will be generated with following timestamps:
>> - created = published = 1 year - 1 month (publish key, but don't generate
>> signatures)
>> - active from = 0 + 2 year - 1 month
>> - inactive = 0 + 3 years - 1 month
>> - delete = 0 + 3 years
>>
>> All records will be re-signed using KSK 'B' during time <1 year - 1 month, 1
>> year>.
>>
>> In time '0 + 1 year' all signatures will have been regenerated using KSK 'B'
>> and KSK 'A' will be removed.
>
> shouldn't we regenerate all signatures with KSK B at '0 + 11mo', ie 1
> month before KSK A is finally deleted ?

BIND will handle this auto-magically for us. We just need to specify how long 
the period between 'obsoleting' a key and removing it from the zone should be. 
AFAIK the current code in BIND 9 computes the required re-signing rate and 
re-signs incrementally so it doesn't kill the CPU on the server.
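
For comparison, with a plain file-backed zone this is driven by a handful of 
zone options in BIND 9.9; whether and how bind-dyndb-ldap exposes the same 
knobs is exactly what we still have to design. A sketch only - the paths and 
values are made up:

   zone "example.com" {
       type master;
       file "example.com.db";
       key-directory "/var/named/keys"; # where the K*.key/K*.private files live
       auto-dnssec maintain;            # follow the timing metadata in the key files
       inline-signing yes;
       # signatures are valid for 30 days and are regenerated 7 days
       # before they expire
       sig-validity-interval 30 7;
   };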

IMHO the important point is that each individual signature has a validity-end 
timestamp and this timestamp limits the TTL in all caches. Invalid signatures 
will be purged when the old key expires (at the latest), which forces clients 
to fetch new signatures, even if the record was re-signed at the last minute.

Naturally, the cache is a very important part of DNS and nobody wants to flush 
all data from all caches at once.
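
This is easy to see with 'dig +dnssec www.example.com A': every RRSIG record 
carries its expiration and inception times, e.g. (in zone-file presentation 
format, with made-up values):

   www.example.com. 3600 IN RRSIG A 8 3 3600 (
                    20130817162906   ; signature expiration
                    20130718162906   ; signature inception
                    31406 example.com. <signature> )

A validating resolver will not use the RRset past the expiration time, no 
matter what the TTL says.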

>> Note that you may want to use 1 KSK with algorithm RSA and another KSK with DSA.
>
> Ok
>
>> I'm not really sure if it makes sense. Could something like this work?
>>
>> objectClasses: ( x.x.x.x.x NAME 'idnsSecKeyGroup' DESC 'DNSSEC key group' SUP
>> top STRUCTURAL MUST
>> ( cn $
>>     key type $ # (KSK or ZSK)
>>     algorithm $ # (RSA, DSA)
>>     key length $ # (2048)
>>     lifetime of the key $ # (1 year)
>>     roll over period $ # (1 month)
>>     number of active keys $ # (1)
>>     number of spare keys $ # (1)
>>    ) )
>
> To summarize, it seems you have 2 distinct objects you really need.
>
> 1. additional parameters to the key (active time and delete time);
> although those times are also embedded in the blob, they should probably
> also be replicated in the idnsSecKeyPair objectclass so that this info can
> be easily queried w/o needing access to the key material itself.
It makes sense, but we have to solve synchronization if some timestamp is 
changed. It is possible to change timestamps via the dnssec-settime utility 
(e.g. the revocation timestamp).
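
For illustration, the timestamps of a key like KSK 'B' from the example above 
could be set (or later changed) with something like this - the key file name 
is made up:

   # publish now, activate in 1 year, retire at 2 years - 1 month,
   # delete at 2 years
   dnssec-settime -P now -A +1y -I +23mo -D +2y Kexample.com.+008+31406

   # later, e.g. schedule revocation of the key in 30 days
   dnssec-settime -R +30d Kexample.com.+008+31406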

> 2. a policy about how often you want to rotate and use keys, and what
> parameters to use by default when creating new keys automatically at
> rotation time. And that policy may need to be per zone.
>
> so
>
> I would rename 'idnsSecKeyGroup' to 'idnsSecKeyPolicy', make it
> auxiliary and add it to the zone object when DNSSEC is in use.
> The attributes you mention seem to be all that is needed.
>
>> The key group could be stored as
>> cn=ksk-rsa, idnsname=example.com, cn=dns, dc=ipa,dc=test
>> Keys could be stored as individual objects under
>> cn=id, cn=ksk-rsa, idnsname=example.com, cn=dns, dc=ipa,dc=test
>
> This is also an option, but why an additional object when we can simply
> add the attributes to the zone object ?
>
> Is there any instance where you might want multiple policies for zones?
> (I do not see how that would work)
Yes, there is. My example didn't mention it explicitly, but remember that 
there are at least two groups of keys: KSK (long-term) and ZSK (shorter-term).

For this reason we need to store at least two distinct sets of 'policy' 
attributes. The situation will be even worse if you need to migrate to a 
different algorithm/key length etc., because that can (temporarily) increase 
the number of 'policies'.

More specifically, we need at least:
cn=ksk-rsa, idnsname=example.com, cn=dns, dc=ipa,dc=test
cn=zsk-rsa, idnsname=example.com, cn=dns, dc=ipa,dc=test

E.g. for migration to DSA (but still with RSA in place):
cn=zsk-dsa, idnsname=example.com, cn=dns, dc=ipa,dc=test
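
Just to make the idea concrete, a sketch of what such a per-key-type policy 
entry could look like in LDIF - the objectClass layout and all attribute names 
below are made up for illustration, the real schema is still to be defined:

   # hypothetical attribute names, illustration only
   dn: cn=ksk-rsa,idnsname=example.com,cn=dns,dc=ipa,dc=test
   objectClass: idnsSecKeyPolicy
   cn: ksk-rsa
   idnsSecKeyType: KSK
   idnsSecKeyAlgorithm: RSASHA256
   idnsSecKeyLength: 2048
   # key lifetime and rollover period in seconds (1 year, 1 month)
   idnsSecKeyLifetime: 31536000
   idnsSecKeyRolloverPeriod: 2592000
   idnsSecActiveKeys: 1
   idnsSecSpareKeys: 1

A second entry cn=zsk-rsa,... would hold the (shorter) lifetime and key length 
for the ZSKs.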

>> In usual cases it solves grouping of KSKs and ZSKs.
>
> Why do you need to further 'group' them ?
There is no technical reason. It could simplify some UI operations (listing 
all KSKs, deleting a whole group of keys at once etc.), but that is all. We 
can live without it.

>>   Also, it enables us to
>> define key group with RSA and another one with DSA algorithm or to migrate
>> from a key group with shorter keys to a key group with longer keys.
>
> I am not sure how this would help, isn't it sufficient to add the 'type'
> to 'idnsSecKeyPair', so you know what type it is regardless ? Although I
> am not even sure why we would care, bind is the one doing the signing
> and the type is already known to bind as it reads it from the private
> file, right ?
An explicit attribute for the key type could simplify the UI and key 
manipulation, but it is not truly necessary.

> I may be missing some detail here ...
>
> [..]
>
>> I have a crazy idea: Could the OpenSSL PKCS#11 implementation help us deal with
>> the key management problems mentioned above - somehow?
>>
>> I know next to nothing about PKCS#11 and related areas, but problems we met
>> (like safe key storage & transport) sound like something common.
>>
>> Could we use PKCS#11 in some clever way, let OpenSSL do the dirty work with
>> key encryption/retrieval and solve HSM support and security at once?
>
> Yes we will certainly reuse crypto primitives from NSS or OpenSSL, but
> that doesn't help with 'key management' itself, that's on us :)
>
>> Would it be possible to write a PKCS#11 module for OpenSSL and let it store
>> keys in IPA - in some generic way? So it will solve key storage for all
>> OpenSSL/PKCS#11 enabled applications and not only for bind-dyndb-ldap?
>
> Dogtag has a generic store, but a generic store is not really our
> problem here, and I am not sure we want to tie this to dogtag; we might.
>
>> As I said, it is just a crazy idea ... and I really know nothing about this area.
>
> Let's explore and see pros/cons.
> The main con I see to involving dogtag is that it would force us to
> install dogtag on each DNS server (or risk having keys on a remote
> server, so network communication and additional issues of reachability).
>
> I think we want to be able to keep installing FreeIPA+DNS and FreeIPA+CA
> and not force FreeIPA+DNS+CA for all DNS servers, as we might want a
> 'lightweight' FreeIPA+DNS replica or even, in the future, just an LDAP+DNS
> 'replica' only for load balancing of DNSes, so forcing the full CA
> dependency would be undesirable. In the extreme case, given that syncrepl
> will keep everything synced locally, we may even go with bind+bind-dyndb-ldap
> *only* and rely on a remote LDAP server to bind to ...
I agree.

I still like the idea of 'The Something' which provides a PKCS#11 interface. 
The 'something' could be configured to use Dogtag, a real HSM, our 
home-grown IPA key storage, or e.g. the 'softhsm' storage provided by the 
softhsm package in Fedora.

It would allow us to switch to really secure storage very simply if a 
particular deployment requires it, or to stay with plain files on disk.

Remember that I have no idea what 'The Something' could be and how it could 
work! :-)

I just think that a general key store with a well-defined interface is better 
than our private magic tool. Other programs could use it for key storage if it 
is general enough.



The most important question at the moment is: "What can we postpone? How 
fragile can it be if we ship it as part of Fedora 20?" Could we declare DNSSEC 
support a "technology preview"/"don't use it for anything serious"?

Have a nice weekend.

-- 
Petr^2 Spacek



