Hi,
Thanks for reaching out to Microsoft Q&A.
This is a common Mapping Data Flow mistake. Mapping Data Flow writes the sink using a file pattern / partitioned output, so it auto-generates names like part-00000… It never preserves the source file name by default.
You must explicitly pass and set the file name.
- Capture source file name
- In Source settings, enable Column -> Add new column
Add:
Column name: SourceFileName
Value: `$$FileName`
This gives you the original xlsx name.
- Clean column names
Do your existing logic (Derived column or Select) to remove special characters.
Do not touch SourceFileName.
- Convert xlsx to csv
Keep transformations as-is.
Ensure sink format = DelimitedText (CSV).
- Control sink file name
- Go to Sink -> Settings
- Enable File name option
- Choose Output to single file (important)
- Set File name expression:
`concat(replace(SourceFileName, '.xlsx', ''), '.csv')`

This forces the CSV to keep the original name.
- Folder handling
Keep your ForEach at folder level.
Sink dataset path should point to the target folder only, not file.
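To make the rename logic concrete, here is a small Python sketch (illustrative only, not ADF code) of what the sink file-name expression `concat(replace(SourceFileName, '.xlsx', ''), '.csv')` evaluates to for a given source file:

```python
def target_file_name(source_file_name: str) -> str:
    """Mirrors the Data Flow sink expression:
    concat(replace(SourceFileName, '.xlsx', ''), '.csv')"""
    # replace() strips the .xlsx extension, then '.csv' is appended
    return source_file_name.replace(".xlsx", "") + ".csv"

print(target_file_name("Sales Report 2024.xlsx"))  # Sales Report 2024.csv
```

So a source file `Sales Report 2024.xlsx` lands in the sink as `Sales Report 2024.csv`, with the original name intact.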
Important truths (don't ignore)
- Mapping Data Flow always generates part files unless you force single file.
- You cannot rename files after sink inside Data Flow.
- If performance matters and files are large, use an ADF Copy Activity instead of Data Flow.
Recommended alternative (simpler & faster)
If you are only converting xlsx -> csv and renaming:
Use Copy Activity
Enable Preserve hierarchy
Use Dynamic file name in sink
As far as I know, Data Flow is overkill for this scenario.
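As a hedged sketch of the Copy Activity route: assuming your ForEach iterates over the `childItems` of a Get Metadata activity, and your sink dataset exposes a `fileName` parameter (the parameter name here is an assumption), the dynamic file name could be set with a pipeline expression like:

```json
{
  "fileName": {
    "value": "@concat(replace(item().name, '.xlsx', ''), '.csv')",
    "type": "Expression"
  }
}
```

Here `item().name` comes from the ForEach iteration, and `replace`/`concat` are standard ADF pipeline expression functions; adjust the property names to match your actual dataset definition.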
Please 'Upvote' (Thumbs-up) and 'Accept as answer' if the reply was helpful. This will benefit other community members who face the same issue.