Hi, why did you remove APIKeys and APIKeys API?
I was just about to try and use it. Was there any issue with it?
Hi @danbars ,
It was unfortunately broken after the oauth2 middleware was updated some time back. It might be pretty easy to add back though. What do you think would be the optimal way for it to work?
The keys for oauth2 are stored in kv-storage but encrypted so that you can't just read out the tokens from storage. The key used for encryption is part of the cookie, so you need both the cookie and access to storage to get a valid token. We could use the same mechanism for api keys?
What kind of endpoints / mechanisms should we have to create api keys? Any other requirements that you can think of?
My use case is slightly different I think. I'll describe it and let me know if you think it fits into your product:
- Users are managed in Auth0 (or anywhere else really)
- When a user logs into the app (UI), currently there's no session cookie, only a client-side token (maybe I can change that and work with session cookies, but I think it means I'll have to move to my own domain? Will it still be on the Auth0 free tier? Need to verify...)
- Somewhere in the UI the user can ask to generate an AppKey. This key is like GitHub personal access keys. It should be something that can be pasted into the user's code and then used in an Authorization header in API requests. It should be valid indefinitely or until it is manually revoked (a nice future feature could be creating it with a TTL up front, not mandatory). It is not related to the user's current session: the user can log out and it will still be valid.
- I expect the key to be generated by the worker. It should be stored in the KV store as the KEY, but hashed. The value should probably be the userId plus the scopes that the key is permitted to use.
- When the user's code calls the API with the key in the auth header, the worker will hash it and look it up in the KV store, then check whether the key is allowed to run the requested operation according to the scopes in the value.
The secret for the hash should be stored in a vault. Since Cloudflare does not have a vault (there was an attempt but it was never released; hopefully it will be someday), the next best place is in environment variables.
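A rough sketch of the validation path I have in mind (the `APPKEYS` KV binding and `APP_KEY_SECRET` secret names are made up for illustration, not part of any existing code):

```js
// Sketch only: hash the incoming AppKey with a secret kept outside KV,
// then look the hash up in KV to get the userId and scopes stored for it.
async function hmacHex(secret, data) {
  const enc = new TextEncoder();
  const key = await crypto.subtle.importKey(
    'raw', enc.encode(secret), { name: 'HMAC', hash: 'SHA-256' }, false, ['sign']
  );
  const sig = await crypto.subtle.sign('HMAC', key, enc.encode(data));
  return [...new Uint8Array(sig)].map(b => b.toString(16).padStart(2, '0')).join('');
}

async function validateAppKey(appKey) {
  // APP_KEY_SECRET is assumed to be bound as a Worker secret (environment variable)
  const hashed = await hmacHex(APP_KEY_SECRET, appKey);
  // APPKEYS is an assumed KV namespace binding; the value holds e.g. { userId, scopes }
  const record = await APPKEYS.get(hashed, 'json');
  return record; // null means the key is unknown or has been revoked
}
```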
WDYT?
I can implement this as part of my own code, but if you think it fits your product I can contribute the code.
Actually I see that secrets are supported in wrangler, so that should be used for storing the hash-secret: https://developers.cloudflare.com/workers/tooling/wrangler/secrets/
Yes, I think that's pretty close to the way I was thinking about it and how the last version worked.
In the last version I split the access and refresh token and stored one half in kv-storage and one in the api-key, but it felt like an odd solution. I've been playing around a bit with web-crypto now and think there's a better solution (and it's what I switched to in the oauth2 handler). The idea is that the cookie (or api-key) consists of two parts: one shortid that is used as the key for kv-storage, and one encryption key that is used for symmetric encryption/decryption of the stored value. This way you need both access to the kv-storage and the full api-key to get at any access/refresh tokens.
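To make that concrete, here's a hedged sketch of the two-part key idea with web-crypto (illustrative only, not the actual oauth2-handler code; the key format and function names are made up):

```js
// The api-key is "<shortid>.<base64 key>": the shortid indexes kv-storage,
// the second part is the raw AES key used to encrypt/decrypt the stored value.
async function importKeyPart(base64Key) {
  const raw = Uint8Array.from(atob(base64Key), c => c.charCodeAt(0));
  return crypto.subtle.importKey('raw', raw, { name: 'AES-GCM' }, false, ['encrypt', 'decrypt']);
}

async function encryptForStorage(base64Key, plaintext) {
  const key = await importKeyPart(base64Key);
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const data = await crypto.subtle.encrypt(
    { name: 'AES-GCM', iv }, key, new TextEncoder().encode(plaintext)
  );
  // keep the iv next to the ciphertext so the value can be decrypted later
  return JSON.stringify({ iv: [...iv], data: [...new Uint8Array(data)] });
}

async function decryptFromStorage(base64Key, stored) {
  const { iv, data } = JSON.parse(stored);
  const key = await importKeyPart(base64Key);
  const plain = await crypto.subtle.decrypt(
    { name: 'AES-GCM', iv: new Uint8Array(iv) }, key, new Uint8Array(data)
  );
  return new TextDecoder().decode(plain);
}
```

Someone with only kv-storage access sees ciphertext, and someone with only the api-key has nothing to decrypt.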
We could use exactly the same solution for api keys, but there are a few things I'm not sure of:
- If we set up an api-key-api (need a better name for that..) it should ideally have access to both the access and the refresh token. If the current user is logged in via the oauth2-handler this works fine, but maybe we could decouple it so it could be used with any auth? One option would be to make it possible to pass a refresh_token when creating the api-key?
- The current oauth2-handler uses AES-GCM encryption with PBKDF2 to generate the keys, which might be too heavy on the cpu. I'm considering whether we could switch to AES-CBC, which I think will be faster (see the sketch after this list).
- It would be nice to be able to list the keys that are generated and have the possibility to revoke them.
- Would also be nice if the keys added some additional info to the requests before passing them on to the origin; for instance, we could define something like scopes to limit the permissions of the token. One option might be to add some more logic on the client that could, for instance, fetch a token with limited scope from auth0 and then submit it when creating the api key?
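For reference, the PBKDF2 derivation mentioned in the second point looks roughly like this (parameters are illustrative); the iteration count is what makes it heavy on the cpu:

```js
// Derive an AES key from a passphrase-like secret with PBKDF2. The CPU cost
// grows with the iteration count, which is the concern on workers.
async function deriveAesKey(secret, salt, iterations = 10000) {
  const enc = new TextEncoder();
  const baseKey = await crypto.subtle.importKey(
    'raw', enc.encode(secret), 'PBKDF2', false, ['deriveKey']
  );
  return crypto.subtle.deriveKey(
    { name: 'PBKDF2', salt: enc.encode(salt), iterations, hash: 'SHA-256' },
    baseKey,
    { name: 'AES-GCM', length: 256 },
    false,
    ['encrypt', 'decrypt']
  );
}
```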
- The Value in the KV store is already encrypted by cloudflare. There's no need to encrypt it again.
- The AppKey provided by the client can't be used as the Key in the KV store. If it is, someone who gained access to the keys of the KV store could use them to make valid requests. Instead, the Key in the KV store should be an HMAC hash of the AppKey.
- Generating an AppKey should not be dependent on oauth (although that is a perfectly valid use case). It should be the developer's decision where to put this handler in the rule chain in order to make sure that only authorized users are able to generate tokens. If it is placed after oauth/jwt validation, then it will be ok.
- Yes, the value should contain some metadata, including permission scopes
So the overall process as I see it is:
1. After the user logs into some web app, they ask to generate an AppKey
2. A request to generate an AppKey hits the ApiGateway and gets to the handler (after the request is authorized using JWT or any other handler)
3. The handler generates an AppKey using `crypto.subtle.generateKey()` and encodes it as base64
4. The AppKey is HMAC-signed using some secret. The secret is fetched from the vault. The resulting hash is used as the Key in the KV store
5. The Value in the KV store is anything the developer wants. Commonly used data fields can be Scopes, UserId, AccountId, etc. One mandatory field is Username, which is used later for validation (see 7). It doesn't have to actually be the user's name per se; it can hold an accountId or userId value, for example
6. The AppKey from step 3 is returned to the client and not stored anywhere! If the user loses it, they should just generate a new one
7. The user copies the returned AppKey into their code, using it in an Authorization header with Basic auth. The AppKey is the Password part, and the user part is whatever the developer decided to store as Username in step 5
8. When the user's code makes a request to the API, the request is validated: the Password part of the Authorization header is HMAC-signed (like step 4) and used to fetch the AppKey from the KV store
9. The Username from the Authorization header is matched against the Username that is stored in the fetched AppKey
10. If validation passed, any data that is stored in the AppKey is added to the request so the developer can use it for further validation (like permission scopes, getting the user's accountId, etc)
Just note that I refer to static (non-refreshing) AppKeys that can be statically copied into the user's code. They are NOT oauth access or refresh tokens. They're like GitHub Personal Keys.
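For illustration, the client-side usage in steps 7-8 could look something like this (URL and values are placeholders):

```js
// The stored Username goes in the user part and the AppKey in the password part
// of a standard Basic Authorization header.
async function callApi() {
  const username = 'account-42';               // whatever was stored as Username in step 5
  const appKey = 'PASTE-THE-GENERATED-APPKEY'; // returned once, at creation time
  const res = await fetch('https://api.example.com/v1/things', {
    headers: { Authorization: 'Basic ' + btoa(`${username}:${appKey}`) }
  });
  return res.json();
}
```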
I think encrypting the tokens in kv-storage is a good idea. Otherwise it's like storing plain-text passwords in a normal database. Even if someone gets access to kv-storage, you don't want them to be able to get at any usable tokens. And it feels good that even I, as a host of the service, can't extract any tokens; that way I can't access any user data even if I wanted to :)
The app-key could consist of two parts where one part is used as the index into kv-storage. It's probably a good idea to hash that part before it's used as an index as well, as you mentioned. Hadn't thought of that before..
Think you have a point that it shouldn't depend on the oauth handler. Do we need to have some kind of user to tie the tokens to, or doesn't it really matter? Thinking that it would be nice to be able to list your tokens and to be able to revoke them. Maybe you need either a basic-auth handler or a jwt handler in front to resolve the user? Are there other kinds of auth that should be supported?
The nice thing with using refresh-tokens is that you can use static api keys in the app. The worker will automatically refresh the access-tokens once they expire and update the value in storage, so from my perspective it's the best of both worlds. Maybe there's some downside to this solution, but so far I haven't really found any. I wouldn't be surprised if GitHub Personal Keys are also translated to jwt's before being passed on to their backend systems. But there's of course no need to limit it to jwt's. We could support basic auth or any other static keys as well.
Think the steps you detailed above look good. From my perspective it would be nice to add some extra encryption of the stored data and support for using refresh-tokens, but that can all be optional. What do you think?
I'm not sure I understand how refresh tokens come into play in this use case. Do you mean that these are refresh tokens that are used against the origin API? I don't think that you need to tie AppKeys plugin to how it works.
You can have different types of security mechanisms between Cloudflare and the origin, for example:
- Access token that has to refresh, as you described
- Shared secret that is used to sign the request (also here there are different ways to achieve this)
- Shared secret that is passed as a header
- Public/private keys that are used to sign the request
All of these different mechanisms can be used by the developer as custom code or as a separate plugin. The AppKey plugin should only be responsible for validating the request that arrives at the worker.
Regarding listing of the keys for a given user - I think you're right that we should make it possible. Perhaps the key has to be
<prefix>-<userid>-<hashedAppKey>
In the flow above I suggested that the username part of the auth header would be stored as part of the KV value. We can change that so it is not stored there; instead it is the <userid> part of the key.
This means that when an auth header comes in with username/password, the password is hashed, and then the key <prefix>-<username>-<hashedPassword> is built and fetched from the KV store.
Using this approach, the developer can write code to list all keys of a given userid (Cloudflare supports listing keys by prefix).
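A rough sketch of how listing could work against the Cloudflare KV REST API (the "appkeys" prefix, separator and credential handling are illustrative):

```js
// List all KV keys for one user by prefix, e.g. "appkeys-<userid>-".
// Uses Cloudflare's list-keys endpoint, which supports a `prefix` query param.
async function listUserKeys(userId, kvAccountId, namespace, authEmail, authKey) {
  // key format assumed from the discussion above: <prefix>-<userid>-<hashedAppKey>
  const prefix = encodeURIComponent(`appkeys-${userId}-`);
  const url = `https://api.cloudflare.com/client/v4/accounts/${kvAccountId}` +
    `/storage/kv/namespaces/${namespace}/keys?prefix=${prefix}`;
  const response = await fetch(url, {
    headers: { 'X-Auth-Email': authEmail, 'X-Auth-Key': authKey }
  });
  const body = await response.json();
  return body.result; // array of { name, expiration?, metadata? }
}
```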
Regarding encryption of the value - the content stored in the value is application data that the developer may decide to show the user in the UI where they manage their keys (for example a list of scopes and a logical name for the key). So if it is encrypted, the encryption keys would have to be available to the developer, because their code needs to be able to decrypt it. So I'm still not sure what the benefit is of encrypting again on top of Cloudflare's own encryption. Perhaps I'm missing something here, please let me know.
Hi @danbars, think we're getting closer to a solution :)
The idea with the encryption would be that the key or seed is part of the api key so that it's not available on the server until a request arrives. This way there's no way for a developer to retrieve any tokens without an actual request from the client.
Maybe I can find some time this weekend to take a first stab at it. If so I'll push a branch to the repo
I think it really depends on what you expect to be in the value part of the key. If that's going to be an access token or something similar that is only used at runtime - you're absolutely right, it can be encrypted. But I think that mostly it's going to be metadata that is also used when the key is edited in some UI (like a backoffice where the user manages his app settings), and that UI does not have the appkey available. For example, it could be a name that was given to this key so the user can easily manage his keys. It can be a list of scopes that the user is supposed to be able to edit from that UI. All these things are no more secret than any other piece of information that my app is managing for the user. And they should be manageable by the user and probably by myself when I need to support my customers.
I believe that handlers should have 1 job only, and it should be reduced as much as possible. If someone wants to store encrypted data in there, it should be handled by custom code. A "hook" mechanism could be nice for that. For example, the options for the handler could contain an onBeforeKeySave callback that the developer can implement to affect what is stored in the key.
{
handlerName: 'appkeys',
path: '/.*',
options: {
onBeforeKeySave: (ctx, appKeyData) => {
const newAppKeyData = {};
// populate the new data as you wish
return newAppKeyData; //this will be saved
}
}
}
I already have a first working draft of the code, I will share it in the next comment.
module.exports = function appkeys ({
createPath = '/appkeys',
kvAccountId,
kvNamespace,
kvAuthEmail,
kvAuthKey,
kvKeySeparator = '|',
kvPrefix = 'appkeys',
appKeyPrefix = 'APPKEY',
appKeyHashSecret = 'ReplaceThisStringWithValueSecret',
}) {
async function handleCreateKey(ctx, next) {
try {
if (ctx.request.method !== 'POST') {
return next(ctx);
}
let {accountId, appKeyData, ttl} = await ctx.request.json();
const appKey = await generateAppKey();
const hashedToken = await hmacSign(appKey, appKeyHashSecret);
const kvKey = `${kvPrefix}${kvKeySeparator}${accountId}${kvKeySeparator}${hashedToken}`;
const writeRes = await writeKV(kvKey, JSON.stringify({data:appKeyData, created: new Date().getTime()}), ttl, kvAccountId, kvNamespace, kvAuthEmail, kvAuthKey);
if (writeRes.ok) {
ctx.status = 200;
ctx.body = JSON.stringify({
appKey: `${appKeyPrefix}:${accountId}:${appKey}`
});
} else {
ctx.status = 500;
ctx.body = JSON.stringify({
error: "AppKey could not be generated",
details: await writeRes.text()
});
}
} catch (err) {
ctx.set('X-Debug-Info', err);
ctx.status = 500;
// ctx.body('ERROR '+err)
}
}
async function handleValidateKey(ctx, next) {
try {
// expecting header - Authorization: Bearer prefix:accountid:appKey
const auth = ctx.request.headers.authorization || '';
const tokens = auth.split(' ');
if (tokens.length !== 2 || tokens[0].toLowerCase() !== 'bearer') {
return next(ctx);
}
const tokenParts = tokens[1].split(':');
if (tokenParts.length !== 3 || tokenParts[0] !== appKeyPrefix) {
return next(ctx);
}
const [_, accountId, appKey] = tokenParts;
const hashedToken = await hmacSign(appKey, appKeyHashSecret);
const kvKey = `${kvPrefix}${kvKeySeparator}${accountId}${kvKeySeparator}${hashedToken}`;
const kvRes = await getKV(kvKey, kvAccountId, kvNamespace, kvAuthEmail, kvAuthKey);
const keyData = kvRes.ok ? await kvRes.json() : {}
ctx.state.appkeyData = keyData.data;
return next(ctx);
} catch (err) {
ctx.set('X-Debug-Info', err);
ctx.body = err.stack;
ctx.status = 500;
}
}
return async function (ctx, next) {
switch (ctx.request.path) {
case createPath:
return handleCreateKey(ctx, next);
default:
return handleValidateKey(ctx, next);
}
};
};
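// Generates a random AppKey: create a throwaway HMAC key, export its raw bytes and base64-encode them.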
async function generateAppKey() {
const key = await crypto.subtle.generateKey({
name: 'HMAC',
hash: 'SHA-256'
}, true, ['sign']);
const exported = await crypto.subtle.exportKey('raw', key);
const exportedString = arrayBuffer2str(exported);
// btoa is available in the Workers runtime (Buffer is not)
return btoa(exportedString);
}
function arrayBuffer2str(buf) {
return String.fromCharCode.apply(null, new Uint8Array(buf));
}
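// Signs the data with the secret using HMAC-SHA-512 and returns the hex-encoded signature.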
async function hmacSign(data, secret) {
// encoder to convert string to Uint8Array
const enc = new TextEncoder("utf-8");
const key = await crypto.subtle.importKey(
"raw", // raw format of the key - should be Uint8Array
enc.encode(secret),
{ // algorithm details
name: "HMAC",
hash: {name: "SHA-512"}
},
false, // export = false
["sign", "verify"] // what this key can do
);
const signature = await crypto.subtle.sign(
"HMAC",
key,
enc.encode(data)
);
const b = new Uint8Array(signature);
return Array.prototype.map.call(b, x => ('00'+x.toString(16)).slice(-2)).join("");
}
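// Writes a value to Workers KV through the Cloudflare REST API; ttl (in seconds) is optional.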
async function writeKV(key, value, ttl, kvAccountId, namespace, authEmail, authKey) {
const url = `https://api.cloudflare.com/client/v4/accounts/${kvAccountId}/storage/kv/namespaces/${namespace}/values/${key}`;
const ttlQueryString = ttl ? `?expiration_ttl=${ttl}` : '';
const headers = {
'X-Auth-Email': authEmail,
'X-Auth-Key': authKey,
};
// eslint-disable-next-line no-undef
const response = await fetch(url + ttlQueryString, {
method: 'PUT',
headers,
body: value,
});
return response;
}
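// Reads a value from Workers KV through the Cloudflare REST API.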
async function getKV(key, kvAccountId, namespace, authEmail, authKey) {
const url = `https://api.cloudflare.com/client/v4/accounts/${kvAccountId}/storage/kv/namespaces/${namespace}/values/${key}`;
const headers = {
'X-Auth-Email': authEmail,
'X-Auth-Key': authKey,
};
// eslint-disable-next-line no-undef
const response = await fetch(url, {
headers
});
return response;
}
Nice! I'll add this in a branch with some tests
If I understand it correctly this solution uses bearer tokens for the api key? I think most bearer tokens are jwt's, but there doesn't seem to be anything in the standard that explicitly states that you can't send whatever you want. If you look at the swagger docs they seem to recommend using a separate header (X-API-KEY), a cookie or a querystring: https://swagger.io/docs/specification/authentication/api-keys/ I think it might make sense to support all these options. Think basic or bearer should work fine as well. Most services I found seem to use basic auth. What are your thoughts?
I started moving your code over to branch #64.
After starting looking at the code I have a few thoughts:
- It would be nice to ensure that the user is authenticated before they are allowed to push any api-keys. For now there's support for basic auth and oauth2. Guess this should cover most cases?
- Would prefer encrypting the payload over hashing the api-key. Hashing only ensures that someone who gets access to the kv-storage can't recover the api-keys; on the other hand, they would still have access to the raw tokens stored there, so they could access the service directly with those tokens.
- I'm considering what kinds of tokens should be added to the request before it is passed on. I'm guessing we want tokens that expire, and in that case I guess it's more or less jwt's or signed urls?
At first I wrote this with basic authentication. But then I realized that it just adds overhead on the client, which has to base64-encode the credentials instead of simply pasting a token. Basic auth was originally designed for a handshake where the server asks for credentials, the user actually fills in a form, and then on submit it is authorized.
In this case there's no such handshake because this is a server-to-server flow. So Bearer token seemed like a better fit. Bearer is meant for "the bearer of this token can access" which is exactly our case.
x-api-key is common probably because it is an AWS convention and many others just copy it.
Personally I prefer using standards where possible, so I think Authorization header is better.
In the question of basic vs bearer I preferred bearer for the client's simplicity.
You can see a good overview of the alternatives here: https://stoplight.io/blog/api-keys-best-practices-to-authenticate-apis/
BTW, a query string is not a very good option because it gets logged and cached in different places, so it is considered less secure. Sometimes it is used, for example when you have to pass the key as part of a URL inside the src attribute of an image, iframe or some other file, where you can't control headers.
In these cases, the specific token is often limited to a specific referrer.
Anyway, I agree with you that we can support different options using configuration. Everything I wrote should just affect the default.
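As a rough sketch of what that configuration could look like (the option names here are made up, not part of the current draft):

```js
// Let the developer configure where the AppKey is read from; the default stays
// the Authorization header, with an opt-in custom header such as x-api-key.
function extractAppKey(ctx, { scheme = 'bearer', customHeader } = {}) {
  if (customHeader && ctx.request.headers[customHeader]) {
    // e.g. x-api-key: APPKEY:accountId:appKey
    return ctx.request.headers[customHeader];
  }
  const auth = ctx.request.headers.authorization || '';
  const [type, value] = auth.split(' ');
  if (!value || type.toLowerCase() !== scheme) {
    return null;
  }
  // basic auth carries base64("user:password"), bearer carries the key as-is
  return scheme === 'basic' ? atob(value) : value;
}
```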
Crypto of the payload and hashing of the keys are 2 different concerns.
- Hashing of the keys I think is mandatory. It's like hashing passwords: there's no reason for the service to be able to restore them.
- Regarding crypto of the payload - why do you assume that the payload will necessarily be any kind of tokens? For example, I use it just to store some metadata which is not different than any other data I have in the database. Why shouldn't we leave it to the developer to decide (a) what will be stored in the payload, and (b) if it will be encrypted ? We can provide some utility function to help with encryption if needed.
It would be nice to ensure that the user is authenticated before they are allowed to push any api-keys
I agree with you. From looking at the way you structured the framework, I saw that you left it to the developer to use multiple handlers for different concerns. So wouldn't it be up to the developer to configure one rule for authentication, followed by an authorization rule, and only then the create-key rule, all for the same path? This way we allow maximum flexibility when using the framework.
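For example, something like this (rule shape copied from the earlier config example; the 'jwt' and 'authorize' handler names are just placeholders for whatever the developer uses):

```js
// Hypothetical rule chain: authenticate and authorize requests to the create-key
// path before the appkeys handler sees them, then let the appkeys handler
// validate keys on everything else.
[
  { handlerName: 'jwt',       path: '/appkeys', options: { /* authentication: resolve the caller */ } },
  { handlerName: 'authorize', path: '/appkeys', options: { /* hypothetical: check the caller may manage keys */ } },
  { handlerName: 'appkeys',   path: '/.*',      options: { createPath: '/appkeys' } }
]
```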
It's been a busy week but hope I'll find some time over the weekend.