Files¶
Upload assets (brand guides, writing samples, images) and manage them.
```python
client.files  # FilesResource
```
Methods overview¶
| Method | HTTP | Scope |
|---|---|---|
| `list` | | |
| `get` | | |
| `upload` | | |
| `download` | | |
| `move` | | |
| `delete` | | |
list¶
```python
def list(
    *,
    limit: int = 25,
    page: int = 1,
    sort_by: str | None = None,
    order_by: str | None = None,
    q: str | None = None,
    folder_id: str | None = None,
    as_json: bool | None = None,
) -> Paginated[File] | dict
```
Returns a Paginated envelope of File. Pass q="..." to search by filename, folder_id="..." to scope to one folder.
get¶
```python
def get(
    file_id: str,
    *,
    as_json: bool | None = None,
) -> File | dict
```
Returns File. file_id accepts either the UUID id or the file_key.
upload¶
```python
def upload(
    files,  # see below
    *,
    folder_id: str | None = None,
    as_json: bool | None = None,
) -> UploadFilesResponse | dict
```
Returns UploadFilesResponse — its .files attribute is a list of UploadedFile, one per uploaded file.
`files` accepts any of:

- A path (`str` or `pathlib.Path`)
- An open binary file handle (`io.BytesIO`, `open(..., "rb")`)
- A `(filename, fileobj, content_type)` tuple (for a custom filename or content type)
- A list mixing any of the above (up to 5 per request)
Limits (server-enforced):

- Images (`jpg`, `jpeg`, `png`, `webp`): max 5 MB each
- Documents (`docx`, `pdf`, `txt`): max 25 MB each
- 1–5 files per request
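Because the limits are server-enforced, oversized files cost you a full round trip before they're rejected. A client-side pre-check can fail fast; this is a sketch, not part of pypresscart — the `check_upload_size` helper and the byte caps (assuming binary megabytes) are ours:

```python
from pathlib import Path

# Caps from the server-enforced limits above (assumption: binary megabytes)
MAX_IMAGE_BYTES = 5 * 1024 * 1024
MAX_DOC_BYTES = 25 * 1024 * 1024
IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".webp"}
DOC_EXTS = {".docx", ".pdf", ".txt"}

def check_upload_size(path: Path) -> None:
    """Raise ValueError before sending a file the server would reject anyway."""
    size = path.stat().st_size
    ext = path.suffix.lower()
    if ext in IMAGE_EXTS and size > MAX_IMAGE_BYTES:
        raise ValueError(f"{path.name}: images are capped at 5 MB")
    if ext in DOC_EXTS and size > MAX_DOC_BYTES:
        raise ValueError(f"{path.name}: documents are capped at 25 MB")
```

Run it over a batch before calling `upload` to reject the whole set locally instead of partway through a request.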
Examples
```python
from pathlib import Path

# Single path
resp = client.files.upload(Path("brand-guide.pdf"))

# Multiple files, targeting a folder
resp = client.files.upload(
    [Path("a.png"), Path("b.png"), Path("c.png")],
    folder_id="fld_1",
)

# From an in-memory buffer with a custom name
import io

buf = io.BytesIO(pdf_bytes)
resp = client.files.upload([("report.pdf", buf, "application/pdf")])

for f in resp.files:
    print(f.id, f.file_url, f.size)
```
pypresscart opens any file paths you pass and closes them automatically.
Note
MIME types are detected from content, not just the extension. The library sniffs the first 64 bytes for magic-byte signatures (JPEG, PNG, WebP, GIF, BMP, TIFF, PDF, DOC, DOCX/XLSX/PPTX via ZIP + extension). This catches files with wrong or missing extensions.
Precedence on upload:

1. Magic-byte sniff of the stream
2. Extension-based guess (`mimetypes.guess_type`)
3. `application/octet-stream` fallback

If you want to force a specific type (e.g. from a buffer where you already
know the content), pass a `(filename, fileobj, content_type)` tuple — the
library uses your value as-is without sniffing.
download¶
```python
def download(
    file_id: str,
) -> bytes
```
Returns the file’s raw contents as bytes. This method doesn’t support dual-mode — there’s no JSON to return.
```python
from pathlib import Path

data = client.files.download("file_1")
Path("local-copy.pdf").write_bytes(data)
```
For very large files, consider an alternative transport — this endpoint reads the full response into memory.
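Whatever transport you substitute, the key pattern is copying in fixed-size chunks instead of materializing the payload. A sketch of that pattern — `save_stream` is our helper, and the in-memory stream here stands in for an HTTP response body:

```python
import io
import os
import shutil
import tempfile

def save_stream(src, dest_path, chunk_size: int = 1 << 16) -> None:
    """Copy a readable binary stream to disk in fixed-size chunks."""
    with open(dest_path, "wb") as out:
        shutil.copyfileobj(src, out, chunk_size)

# Demo: an in-memory stream stands in for a streamed HTTP response
dest = os.path.join(tempfile.mkdtemp(), "local-copy.bin")
save_stream(io.BytesIO(b"x" * 100_000), dest)
```

Peak memory stays at one chunk (64 KiB here) regardless of the file size.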
move¶
```python
def move(
    body: MoveFilesRequest | BaseModel | dict,
    *,
    as_json: bool | None = None,
) -> MoveFilesResponse | dict
```
Returns MoveFilesResponse — a thin wrapper with a moved_count field.
Body (MoveFilesRequest):
| Field | Type | Notes |
|---|---|---|
| `file_ids` | `list[str]` | 1–50 per call |
| `folder_id` | `str \| None` | destination folder; explicit `null` moves to root (see warning below) |
```python
from pypresscart import MoveFilesRequest

client.files.move(
    MoveFilesRequest(file_ids=["f_1", "f_2"], folder_id="fld_archive")
)
# MoveFilesResponse(moved_count=2)
```
Warning
Moving a file to root (out of any folder) requires a dict, not the Pydantic model.
The server requires folder_id to be present in the body as an explicit
null to mean “move to root”. Pydantic models are serialized with
exclude_none=True, which omits the key entirely — the API then returns
400. Use a dict for this one call:
```python
# ✅ explicit null — moves to root
client.files.move({"file_ids": ["f_1"], "folder_id": None})

# ❌ field gets dropped, server returns 400
client.files.move(MoveFilesRequest(file_ids=["f_1"], folder_id=None))
```
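The failure mode comes down to serialization. A plain-dict stand-in for what `exclude_none=True` does to the payload (this mimics the behavior; it is not Pydantic itself):

```python
body = {"file_ids": ["f_1"], "folder_id": None}

# What exclude_none=True serialization produces: None-valued keys vanish
model_payload = {k: v for k, v in body.items() if v is not None}
# The API now sees no folder_id key at all, rather than an explicit null
```

A raw dict passed to `move` skips that filtering step, which is why it can carry the explicit `null` through to the server.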
See Dual-Mode I/O.
delete¶
```python
def delete(
    file_id: str,
    *,
    as_json: bool | None = None,
) -> DeleteFileResponse | dict
```
Returns DeleteFileResponse, a thin wrapper around the API's {"success": true} payload with a boolean success field.
Recipes¶
Upload and link to a campaign questionnaire¶
```python
uploaded = client.files.upload("writing-samples.docx").files[0]
client.campaigns.link_questionnaire(
    "cmp_1",
    {
        "file_id": uploaded.file_key,
        "file_url": uploaded.file_url,
        "file_name": uploaded.name,
        "file_size": uploaded.size,
    },
)
```
Cleanup old files¶
```python
from datetime import datetime, timedelta, timezone

cutoff = datetime.now(timezone.utc) - timedelta(days=90)

page = client.files.list(limit=200, sort_by="created_at", order_by="asc")
stale = [f.id for f in page.records if f.created_at and f.created_at < cutoff]
for file_id in stale:
    client.files.delete(file_id)
```