File Upload & Result Delivery
This page focuses on the upload, processing, and download experience that customers actually touch.
ExecFabric already supports connecting input files, batch status, processing artifacts, and result download into one execution chain that customers can understand. The real question is not just whether a file can be uploaded, but how the file is handled inside a controlled process and how the result comes back inside a delivery boundary.
01 Standard upload
Fits common input files that can be submitted directly into a controlled upload flow before later processing or manual confirmation.
02 Chunked upload
If the file is large, it can be uploaded in chunks so a transfer failure does not force the whole package to restart.
03 Batch tracking
A file does not disappear after upload. It enters a batch so later processing, tracking, and result matching stay clear.
04 Result delivery
If the task outputs a file, the platform can return that result file as a formal downloadable artifact.
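The chunked-upload idea above can be sketched in a few lines. This is illustrative only: ExecFabric's actual upload API is not documented here, so `upload_chunk` is a hypothetical stand-in for whatever call sends one chunk, and the chunk size is an assumption. The point is that a failed chunk is retried alone instead of restarting the whole package.

```python
import io

CHUNK_SIZE = 5 * 1024 * 1024  # 5 MiB per chunk (assumed size, not a platform constant)

def iter_chunks(stream, chunk_size=CHUNK_SIZE):
    """Yield (index, bytes) pairs so a failed chunk can be retried on its own."""
    index = 0
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        yield index, chunk
        index += 1

def upload_in_chunks(stream, upload_chunk, retries=3):
    """Send each chunk, retrying only the chunk that failed."""
    for index, chunk in iter_chunks(stream):
        for attempt in range(retries):
            try:
                upload_chunk(index, chunk)  # hypothetical per-chunk transfer call
                break
            except IOError:
                if attempt == retries - 1:
                    raise
```

A real integration would also carry the batch number with each chunk so the server can reassemble the file inside the correct batch.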
End To End
The customer submits the input file through the upload entry point opened for the current project. The exact entry point depends on the delivery scope.
The file first enters a controlled batch so later processing, tracking, and result matching remain clear.
The platform decides whether to process immediately, wait for an executor, or enter a longer business workflow according to the current project setup.
The user can directly see where this batch of files currently is in the process.
If the task outputs Excel files, reports, summary files, or other formal results, the platform organizes them into a deliverable entry.
After the user gets the result file, they can download it, review it, or continue into the next business step.
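The end-to-end flow above can be modeled as a small batch lifecycle. The status names below are assumptions chosen to mirror the stages described in this section, not ExecFabric's actual status values; the sketch only shows that a batch moves through well-defined stages rather than jumping from "upload succeeded" straight to "done."

```python
from enum import Enum

class BatchStatus(Enum):
    UPLOADED = "uploaded"                # file accepted into a controlled batch
    PROCESSING = "processing"            # immediate run, or queued for an executor
    AWAITING_REVIEW = "awaiting_review"  # longer business workflow before delivery
    DELIVERED = "delivered"              # results organized into a deliverable entry

def next_statuses(status):
    """Which stages a batch can legally move into from `status` (illustrative)."""
    transitions = {
        BatchStatus.UPLOADED: {BatchStatus.PROCESSING},
        BatchStatus.PROCESSING: {BatchStatus.AWAITING_REVIEW, BatchStatus.DELIVERED},
        BatchStatus.AWAITING_REVIEW: {BatchStatus.DELIVERED},
        BatchStatus.DELIVERED: set(),
    }
    return transitions[status]
```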
Chat Binding
The current design does not ask the script to scan a whole upload directory or guess from file names. It explicitly binds the batch number from this upload to the current chat execution.
Users can now click Upload File in the quick area at the bottom of the chat execution view. It sits next to Upload Script, and the adjacent help icon explains that this batch will be bound to the current session.
After upload, the current chat session stores this uploadBatchId. When the user confirms execution of a Skill, the backend passes the snapshot of that file batch into the execution.
At runtime the script gets the current batch number, input file list, and batch directory information. It processes the files bound to this confirmed execution, not a historical batch from somewhere else.
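The binding described above can be sketched as two steps: store the batch id on the session at upload time, then freeze it into the execution payload at confirm time. The field names `uploadBatchId`, `fileService`, `batchNo`, and `inputFiles` come from this page; everything else here is an illustrative assumption, not the backend's real code.

```python
def bind_upload(session, batch_no, input_files):
    """At upload time, store the new batch on the current chat session.
    A later upload simply overwrites the previous binding."""
    session["uploadBatchId"] = batch_no
    session["inputFiles"] = input_files

def build_execution_payload(session):
    """At confirm time, snapshot the bound batch into the execution input,
    so the executor sees a frozen copy rather than a live view."""
    return {
        "uploadBatchId": session["uploadBatchId"],
        "fileService": {
            "batchNo": session["uploadBatchId"],
            "inputFiles": list(session["inputFiles"]),
        },
    }
```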
| Stage | What happens now | Why it matters |
|---|---|---|
| Upload file | The frontend creates a controlled batch | The file enters a traceable batch first instead of being scattered across ad hoc uploads. |
| Trigger chat execution | The current session carries uploadBatchId | This makes it explicit which input batch the execution should consume. |
| Confirm execution | The backend fills in a fileService snapshot | The executor receives the batch number, directory, and input-file list. |
| Run the script | It reads only the currently bound batch | This avoids accidentally reading another batch, another customer, or older files. |
| What the script can read directly | Current meaning |
|---|---|
| EXECFABRIC_UPLOAD_BATCH_NO | The currently bound batch number, used to decide which input batch this execution should consume. |
| EXECFABRIC_UPLOAD_INPUT_FILES_JSON | The input-file array for the current batch. The script can directly read fields such as fileName, fileId, and size. |
| EXECFABRIC_UPLOAD_FILE_SERVICE_JSON | The full file-service snapshot, including context such as batchNo and inputFiles. |
| EXECFABRIC_SKILL_INPUT_PAYLOAD_JSON | The full input payload for this execution, which also keeps uploadBatchId and fileService. |
Both Python and Shell executors currently inject these variables into the runtime environment. If the user uploads a new batch of files and confirms execution again, the new batch replaces the old one. Scripts should always read inputs from the current execution context instead of relying on unstable rules such as "the latest file."
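A minimal sketch of how a Python executor script might read these injected variables. The variable names and the fileName/fileId/size fields come from the table above; the helper function itself is illustrative, and failing fast when no batch is bound follows the rule of never guessing "the latest file."

```python
import json
import os

def read_upload_context(env=os.environ):
    """Read the bound batch from the injected environment variables.
    Raises instead of guessing when no batch is bound to this execution."""
    batch_no = env.get("EXECFABRIC_UPLOAD_BATCH_NO")
    if not batch_no:
        raise RuntimeError("No upload batch bound to this execution")
    input_files = json.loads(env.get("EXECFABRIC_UPLOAD_INPUT_FILES_JSON", "[]"))
    return batch_no, input_files

def file_names(input_files):
    """Convenience helper: the names of the files bound to this execution."""
    return [f["fileName"] for f in input_files]
```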
Fit
| Scenario | Why it fits | Fit level | Notes |
|---|---|---|---|
| Upload raw files first, then generate reports | Both the input and output are files | Already a good fit | For example, result sheets, summary tables, exports, or processed artifacts. |
| The file is large and a one-shot transfer may fail | A more stable upload method is needed | Already a good fit | Chunked upload is better than retrying the whole package. |
| You need to preserve the processing status of a batch of input files | It is easier to track the batch and the result | Already a good fit | The platform emphasizes the batch view instead of showing only "upload succeeded." |
| You only need text Q&A and no files are involved | The file path is not the main concern | No need to force it | Those scenarios should start with intelligent execution and the AI recommendation path. |
Next Read
The file path is only one part of delivery. A project is easier to evaluate completely when you read this page together with the customer flow, the deliverables, and the onboarding checklist.