Sometimes things go wrong and you need to recover access to an EKS cluster using only the root account. Here’s one way of restoring the kubectl command to its former glory.

According to Stack Overflow:

if you don’t have authority to assume the IAM entity (user or role) that created the cluster, you’re SOL

By design, I had an IAM user who already had access to the cluster, so to regain access I just needed to:

  1. Create a new access key for that IAM user (see the first sketch after this list)
  2. List the clusters:
[email protected]:~$ eksctl --region REGION_NAME --profile PROFILE_NAME get cluster
2021-07-15 12:28:43 [ℹ]  eksctl version 0.54.0
2021-07-15 12:28:43 [ℹ]  using region us-east-1
NAME		REGION		EKSCTL CREATED
...
  3. Update the kubeconfig so kubectl can access the cluster (see the second sketch after this list)
  4. Test that access works:
    AWS_PROFILE=PROFILE_NAME kubectl --cluster arn:aws:eks:OBTAINED_FROM_PREVIOUS_COMMANDS get svc
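
A minimal sketch of step 1, assuming you are signed in as the root account (or another IAM admin); the user name and profile name below are placeholders:

# Create a fresh access key for the IAM user that is mapped into the cluster
# (IAM users can hold at most two access keys, so an old one may need deleting first)
aws iam create-access-key --user-name MAPPED_IAM_USER

# Save the returned AccessKeyId and SecretAccessKey into a local named profile
aws configure --profile PROFILE_NAME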
    

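And a minimal sketch of step 3, assuming the cluster name and region come from the get cluster output above; either command writes the cluster's credentials into ~/.kube/config:

# Merge the cluster into ~/.kube/config using the new profile
aws eks update-kubeconfig --region us-east-1 --name CLUSTER_NAME --profile PROFILE_NAME

# ...or, equivalently, with eksctl
eksctl utils write-kubeconfig --region us-east-1 --cluster CLUSTER_NAME --profile PROFILE_NAME
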
If you did everything right, you now have access to the cluster. If not, you will need to find an IAM user or role that does have access, e.g. by listing the cluster's IAM identity mappings:

[email protected]:~$ eksctl --region us-east-1 --profile jfrog-god get iamidentitymapping --cluster yolk-dev
2021-07-15 12:29:33 [ℹ]  eksctl version 0.54.0
2021-07-15 12:29:33 [ℹ]  using region us-east-1
ARN										USERNAME	GROUPS
arn:aws:iam::581165678935:role/eksctl-yolk-dev-cluster-FargatePodExecutionRole-YW3O9KT2W0T1	system:node:	system:bootstrappers,system:nodes,system:node-proxier
arn:aws:iam::581165678935:user/andrew.dever					admin		system:masters
arn:aws:iam::581165678935:user/cloudformation					admin		system:masters
arn:aws:iam::581165678935:user/geoff.williams					admin		system:masters
arn:aws:iam::581165678935:user/greg						admin		system:masters
arn:aws:iam::581165678935:user/jfrog.pipelines					admin		system:masters
arn:aws:iam::581165678935:user/jfrog.pipelines.god				admin		system:masters
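
Once you can authenticate as one of the identities listed above, a hedged sketch of how it could be used to map another IAM user back into the cluster (the user in the ARN below is a placeholder):

# Run as one of the system:masters identities above to grant cluster-admin
# to an additional IAM user
eksctl create iamidentitymapping \
  --region us-east-1 \
  --cluster yolk-dev \
  --arn arn:aws:iam::581165678935:user/NEW_USER \
  --username admin \
  --group system:masters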