by Takanori Maehara on 18th December 2025
These are unofficial solutions to the exercises in API Design Patterns by JJ Geewax. Answers reflect my interpretation and may differ from the intended solutions. Feedback: @tmaehara
Imagine you need to create an API for managing recurring schedules ("This event happens once per month"). A senior engineer argues that storing a value for seconds between events is sufficient for all the use cases. Another engineer thinks that the API should provide different fields for various time units (e.g., seconds, minutes, hours, days, weeks, months, years). Which design covers the correct meanings of the intended functionality and is the better choice?
The first approach is fundamentally flawed for this use case because "month" is not a fixed duration (sometimes 28 days and sometimes 31 days). Therefore, the only choice that aligns with the requirement is the second approach.
The proposed schema looks like this:
interface Schedule {
  seconds: number;
  minutes: number;
  hours: number;
  days: number;
  weeks: number;
  months: number;
  years: number;
}
One thing we need to consider is the semantics when multiple fields are specified. For interval-based schedules, we commonly interpret them as additive: if {days: 1, months: 1} is specified, the next execution is after 1 month and 1 day. This is consistent with ISO 8601 durations.
Note: This type of schedule is called an interval-based schedule (how frequently to run). Another type is a pattern-based schedule (when to run).
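As a rough illustration of the additive interpretation, here is a minimal sketch (not from the book; the end-of-month rollover policy is deliberately left open) that computes the next run from the previous one:

function nextRun(last: Date, s: Partial<Schedule>): Date {
  // apply calendar-aware units first, then fixed-length units
  const d = new Date(last.getTime());
  d.setFullYear(d.getFullYear() + (s.years ?? 0));
  d.setMonth(d.getMonth() + (s.months ?? 0));
  d.setDate(d.getDate() + (s.weeks ?? 0) * 7 + (s.days ?? 0));
  d.setHours(d.getHours() + (s.hours ?? 0));
  d.setMinutes(d.getMinutes() + (s.minutes ?? 0));
  d.setSeconds(d.getSeconds() + (s.seconds ?? 0));
  return d;
}
// e.g. nextRun(lastRun, { months: 1, days: 1 }) shifts lastRun by one calendar month plus one day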
In your company, storage systems use gigabytes as the unit of measurement (10^9 bytes). For example, when creating a shared folder, you can set the size to 10 gigabytes by setting sizeGB = 10. A new API is launching where networking throughput is measured in Gibibits (2^30 bits) and wants to set bandwidth limits in terms of Gibibits (e.g., bandwidthLimitGib = 1). Is this too subtle a difference and potentially confusing for users? Why or why not?
Yes, it's confusing. Storage uses decimal (GB = 10^9), bandwidth uses binary (Gib = 2^30). Users naturally relate these - "how long to transfer this file" - but mixed units break intuition.
Network bandwidth is conventionally decimal (Gb). Use bandwidthLimitGb to maintain consistency.
Imagine you're building a bookmark API where you have bookmarks for specific URLs online and can arrange these in different folders. Do you model this as two separate resources (Bookmark and Folder) or a single resource (e.g., Entity), with a type in-line to say whether it's acting as a folder or a bookmark?
I recommend modelling them as two separate resources. They are fundamentally different entities: a bookmark points to a URL, while a folder exists only to contain other entries.
Merging them into a single entity would require conditional fields and validation (e.g., "URL required only if type=bookmark"), making the contract fragile.
That said, when listing folder contents (List[Bookmark | Folder]), a type discriminator field is needed in the JSON payload for deserialisation. This is fine - it's only for mixed-type responses, not a sign that they should be a single resource; see Chapter 16.
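To make the mixed-type response concrete, here is a sketch (field names are illustrative, not from the book) of the two resources plus a discriminated union used only for folder listings:

interface Bookmark {
  type: "bookmark";   // discriminator, needed only for deserialisation of mixed lists
  id: string;
  url: string;
  folderId?: string;
}
interface Folder {
  type: "folder";
  id: string;
  name: string;
  parentId?: string;
}
type FolderEntry = Bookmark | Folder;   // e.g. ListFolderContentsResponse { entries: FolderEntry[] }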
Come up with an example and draw an entity relationship diagram that involves a many-to-many relationship, a one-to-many relationship, a one-to-one relationship, and an optional one-to-many relationship. Be sure to use the proper symbols for the connectors.
┌──────────┐         ┌────────────┐         ┌──────────┐
│ Student  │>───────<│ Enrollment │>───────<│  Course  │
└──────────┘         └────────────┘         └──────────┘
     │                                           │
     │                                           │
    1│1                                         *│1
     │                                           │
┌────┴─────┐                               ┌─────┴──────┐
│ Profile  │                               │ Instructor │
└──────────┘                               └────────────┘

┌──────────┐
│ Student  │
└──────────┘
     │
     │ (0..*)
     ○
┌────┴─────┐
│  Review  │
└──────────┘
A company in Japan targeting Japanese speakers only wants to use UTF-16 encoding rather than UTF-8 for the string fields in their API. What should they consider before making this decision?
The JSON standard (RFC 8259) mandates UTF-8. Using UTF-16 breaks standard compliance, affecting tooling, libraries, and interoperability.
Space savings (their likely motivation) are better achieved with HTTP compression: gzip/brotli typically achieves 70-90% compression on text, far exceeding UTF-16's ~33% savings for Japanese, without breaking standards.
Imagine that ChatRoom resources currently archive any messages older than 24 hours. If you want to provide a way to disable this behavior, what should you name the field that would do this? Is this the best design to provide this customization?
If using a boolean, name it disableArchiving - this ensures false (the zero-value default) corresponds to the current behavior (archiving enabled).
However, this is suboptimal. In the future, we may want more granular control over the archiving window. To handle that case, it's better to use an archiveHours: number field. To avoid the zero-value default problem, the best design wraps it in a nested configuration message like this:
interface ChatRoomRequest {
  archiveConfig?: ArchiveConfig;
}
interface ArchiveConfig {
  hours?: number;
}
with value validation on the hours field.
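A minimal validation sketch (the bounds and the meaning of 0 are my assumptions) showing how an absent field keeps the current behaviour while an explicit value is range-checked:

function validateArchiveConfig(cfg?: ArchiveConfig): void {
  if (cfg?.hours === undefined) return;   // field absent: keep the current default (archive after 24 hours)
  // explicit value: 0 could mean "never archive"; otherwise an upper bound applies
  if (!Number.isInteger(cfg.hours) || cfg.hours < 0 || cfg.hours > 8760) {
    throw new Error("archiveConfig.hours must be an integer between 0 and 8760");
  }
}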
If your API is written in a language that natively supports arbitrarily large numbers (e.g., Python 3), is it still best practice to serialize these to JSON as strings? Or can you rely on the standard JSON numeric serialization?
Yes, still serialize as strings. The server's language capabilities are irrelevant. What matters is what every client can parse: many JSON parsers (notably JavaScript's) decode numbers into IEEE 754 doubles and silently lose precision beyond 2^53.
The serialization format must accommodate the weakest link in the chain, not the strongest.
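A tiny illustration (the value is chosen to sit just above 2^53) of why the string form is safer for such clients:

const lossy = JSON.parse('{"balance": 9007199254740993}');
console.log(lossy.balance);          // 9007199254740992 - silently rounded to the nearest double
const safe = JSON.parse('{"balance": "9007199254740993"}');
console.log(BigInt(safe.balance));   // 9007199254740993n - exact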
If you want to represent the native language for a chat room, would it be acceptable to have an enumeration value for the supported languages? If so, what would this enumeration look like? If not, what data type is best?
The book assumes enumeration = integer-mapped enumeration. Don't use integer-mapped enums. Use strings for readability. In this case, we could use BCP 47 codes (e.g., en-GB, ja-JP), which include both language and regional variants.
Modern APIs support string-valued enums. Whether to use them depends on the use case:
Use enum if clients must handle each language explicitly (e.g., UI translation, language-specific features). This signals that new language support requires client updates.
Use plain string if language is just metadata. This avoids forcing client updates when new languages are added.
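In TypeScript terms, the two choices look roughly like this (type and field names are illustrative):

type SupportedLanguage = "en-GB" | "ja-JP";                 // enum-like: clients handle each value explicitly
interface ChatRoomWithEnum { nativeLanguage: SupportedLanguage; }
interface ChatRoomWithTag { nativeLanguage: string; }       // any BCP 47 tag: just metadata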
If a specific map has a very uneven distribution of value sizes, what's the best way to place appropriate size limits on the various keys and values?
There are two options: (1) place a single size limit on the map as a whole, or (2) place individual limits on specific keys and values.
Option 2 is a workaround when necessary, but it's often a code smell - it's harder to reason about and may indicate the map has too many fields. I prefer Option 1 with refactoring if per-key limits feel excessive.
Design a new encoding and decoding scheme with a larger checksum size, chosen relative to the size of the identifier.
Extend Crockford's optional checksum by using mod 37^k, where k scales with the identifier length. Encode using the same 37-character set (32 data characters + 5 checksum-only characters).
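A sketch (my own construction, not a standard) of such an encoder: interpret the identifier as a big integer, take it mod 37^k, and append k symbols from Crockford's 37-symbol check alphabet.

const CHECK = "0123456789ABCDEFGHJKMNPQRSTVWXYZ*~$=U";   // 32 data symbols + 5 check-only symbols
function checksum(id: bigint, k: number): string {
  let r = id % (37n ** BigInt(k));
  let out = "";
  for (let i = 0; i < k; i++) {
    out = CHECK[Number(r % 37n)] + out;
    r /= 37n;
  }
  return out;
}
// e.g. k = Math.ceil(dataSymbols / 8): roughly one check symbol per 8 data symbols (an assumption)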
Calculate the likelihood of a collision for a randomly chosen 2-byte (16-bit) identifier. (Hint: See the birthday problem as a guide.)
The ID space has size N = 2^16 (65,536). The probability of at least one collision among n random identifiers is approximately 1 - exp(-n^2 / (2N)). So with n = 2^8 (256) identifiers, a collision occurs with probability roughly 1 - exp(-1/2) ≈ 40%.
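A quick numeric check of that approximation (values as above):

const N = 2 ** 16, n = 2 ** 8;
const p = 1 - Math.exp(-(n * n) / (2 * N));   // ≈ 0.393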
Design an algorithm to avoid collisions when using a 2-byte (16-bit) identifier that doesn't rely on a single counter. (Hint: Think about distributed allocation.)
The problem statement is not well written. The expected answer is likely block allocation: (1) each node asks a central allocator for a block of IDs, (2) the allocator advances its counter by the block size, and (3) each node hands out IDs from its local block without further coordination.
But this technically still relies on a single counter in steps 1 and 2. To avoid that entirely, we'd need a generate-then-verify approach, which is impractical due to the collision likelihood (Exercise 6.2).
Alternatively, allocate rule-based ranges to each node (e.g., 0x0000-0x7FFF to node 1, 0x8000-0xFFFF to node 2). This avoids the single counter but reveals internal architecture to consumers.
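A sketch of the rule-based variant (node numbering and range math are my assumptions):

function allocateId(nodeIndex: number, numNodes: number, localCounter: number): number {
  const rangeSize = Math.floor(2 ** 16 / numNodes);   // each node owns a fixed slice of the 16-bit space
  if (localCounter >= rangeSize) throw new Error("node's ID range is exhausted");
  return nodeIndex * rangeSize + localCounter;        // collision-free across nodes by construction
}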
Is it acceptable to skip the standard create and update methods and instead just rely on the replace method?
Sometimes. Drawbacks are:
If we use REPLACE (PUT) instead of CREATE (POST), we lose the ability to do conditional creation (create only if the resource does not exist).
If we use REPLACE (PUT) instead of UPDATE (PATCH), we lose the ability to do partial updates.
What if your storage system isn't capable of strong consistency of newly created data? What options are available to you to create data without breaking the guidelines for the standard create method?
Two options:
Use a custom method (e.g., ImportLogEntries instead of CreateLogEntry). Custom methods carry no consistency expectations, signaling to users that eventual consistency applies.
Use a long-running operation - if you can determine when replication completes, return an LRO (long-running operation, see Chapter 10) that resolves once the data is fully available.
Why does the standard update method rely on the HTTP PATCH verb rather than PUT?
Because PATCH is for partial update; PUT is for full replace. Using PUT becomes a backward-incompatible change when adding new fields - existing clients would unintentionally overwrite new fields with empty values.
Imagine a standard get method that also updates a hit counter. Is it idempotent? What about a standard delete method? Is that idempotent?
GET with hit counter: Not idempotent - each call changes state.
DELETE: Not idempotent - deleting a non-existent resource returns 404.
Note: The book takes an imperative view (action-focused) where DELETE means "remove this resource." A declarative view (result-focused) where DELETE means "this resource should not exist" would make it idempotent. Both interpretations exist in practice.
Why should you avoid including result counts or supporting custom sorting in a standard list method?
Both features are easy at small scale but become expensive/complex as data grows.
Imagine a ChatRoom resource might have a collection of administrator users rather than a single administrator. What do we lose out on by using an array field to store these resources? What alternatives are available?
We lose the ability to address and modify a single administrator's data. Alternatives include using a map (keyed by user id) or sub-resources.
How do we go about removing a single attribute from a dynamic data structure?
Use an explicit field mask with an empty body, like
PATCH /chatRooms/1?fieldMask=settings.test HTTP/1.1
Content-Type: application/json
{}
How do we communicate which fields we're interested in with a partial retrieval?
Use field mask.
How do we indicate that we'd like to retrieve (or update) all fields without listing all of them out in the field mask?
For retrieval, unset fieldMask or use fieldMask=*.
For update, use fieldMask=*. (unset will trigger field mask inference)
What field mask would we use to update a key "`hello.world`" in a map called settings?
settings.`hello.world`
What if creating a resource requires some sort of side effect? Should this be a custom method instead? Why or why not?
The book's position is: use a custom method if there is a side effect.
I personally think this is not always right. Consider a sign-up API that sends a welcome email to the customer. The book explicitly recommends a custom method for this pattern (page 108), but sending a welcome email is just an implementation detail.
When should custom methods target a collection? What about a parent resource?
Why is it dangerous to rely exclusively on stateless custom methods?
The book's position is: Statefulness tends to creep in over time - custom configurations, versioning, billing, audit trails. Retrofitting state into a stateless design is difficult. Better to attach custom methods to resources from the start, even if the method itself doesn't store the input data.
However, in my view, it's also hard to predict future requirements. Starting stateless is often the right choice (YAGNI). The real danger is not starting stateless - it's refusing to introduce resources when the need becomes clear, ending up with parameter bloat like POST /:translate?model=X&glossary=Y&version=Z.
Why should operation resources be kept as top-level resources rather than nested under the resources they operate on?
Why does it make sense to have LROs expire?
Because completed operations become irrelevant. The result resource is the permanent record, so keeping the operation itself serves no purpose.
If a user is waiting on an operation to resolve and the operation is aborted via the custom cancel method, what should the result be?
The returned structure is:
interface Operation<ResultT, MetadataT> {
  id: string;
  done: boolean;
  result?: ResultT | OperationError;
  metadata?: MetadataT;
}
interface OperationError {
  code: string;
  message: string;
  details?: any;
}
Set done = true - this indicates the operation has terminated, not that the work completed successfully.
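For example, a cancelled operation might be returned as follows (the error code and message are illustrative assumptions):

const cancelled: Operation<unknown, unknown> = {
  id: "operation-123",
  done: true,                                                      // terminated, though not successfully
  result: { code: "CANCELLED", message: "Cancelled by the user before completion." },
};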
When tracking progress of an LRO, does it make sense to use a single field to store the percentage complete? Why or why not?
Usually, other metrics such as recordProcessed and totalRecordsEstimated are more useful.
How do rerunnable jobs make it possible to differentiate between permissions to perform an action and permission to configure that action?
By separating execution from configuration - different endpoints allow different permissions for each.
If running a job leads to a new resource being created, should the result be an execution or the newly created resource?
The newly created resource. An Execution is only needed when there's no other resource to hold the results.
Why is it that execution resources should never be explicitly created?
The book's expected answer is: because executions are created by internal processes, not by users.
But this is just a design choice, so "should never be" is a poor word choice. Of course, it does not make sense to provide the standard CREATE method for them.
How big does attribute data need to get before it makes sense to split it off into a separate singleton sub-resource? What considerations go into making your decision?
There is no fixed threshold. The main size-related considerations are whether the attribute data noticeably bloats the parent's payload and whether most callers actually need it when retrieving the parent. I'd omit other considerations not relevant to size here (security, volatility, etc.).
Why do singleton sub-resources only support two of the standard methods (get and update)?
Because a singleton sub-resource represents a property of the parent resource; creating or deleting it independently doesn't make sense.
Why do singleton sub-resources support a custom reset method rather than repurposing the standard delete method to accomplish the same goal?
The contract of DELETE is that the deleted resource no longer exists. But a sub-resource is a property of its parent: its value can be reset to a default/empty state, yet the property itself cannot be missing.
Why can't singleton sub-resources be children of other singleton sub-resources in the resource hierarchy?
The book's expected answer is: there's no value added - they should be siblings instead.
But "can't" is too strong a word choice. AIP-156 explicitly states "Singleton resources may parent other resources." It's a simplicity preference, not a hard constraint.
When does it make sense to store a copy of a foreign resource's data rather than a reference?
When the data is bounded and we need a historical snapshot (e.g., priceAtPurchase). The main reasons to avoid copying are size growth and data inconsistency.
Why is it untenable to maintain referential integrity in an API system?
Two strategies exist: block changes to a resource while other resources still reference it, or cascade every change to all referrers. Both become untenable as the number of references (and of services holding them) grows.
Instead, asking consumers to verify references themselves is much easier.
An API claims to ensure references will stay up-to-date across an API. Later, as the system grows, they decide to drop this rule. Why is this a dangerous thing to do?
The phrase "references will stay up-to-date" is ambiguous (referential integrity or stale data?). Here, we interpret it as referential integrity.
The claim ensures clients can skip reference validation. Dropping this breaks backward compatibility.
Design an API for associating users with chat rooms that they may have joined and that also stores a role and the time when they join.
interface ChatRoomMembership {
  id: string;
  userId: string;
  chatRoomId: string;
  role: string;
  joinedAt: timestamp;
}
// Standard methods
POST   /chatRoomMemberships        // CreateChatRoomMembership
GET    /chatRoomMemberships/{id}   // GetChatRoomMembership
GET    /chatRoomMemberships        // ListChatRoomMemberships (filter by userId or chatRoomId)
PATCH  /chatRoomMemberships/{id}   // UpdateChatRoomMembership (e.g., change role)
DELETE /chatRoomMemberships/{id}   // DeleteChatRoomMembership (leave room)
In a chat application, users might leave and join the same room multiple times. How would you model the API such that you maintain that history while ensuring that a user can't have multiple presences in the same chat room at the same time?
Create two resources:
ChatRoomMembership - current state, CREATE on join, DELETE on leave
ChatRoomMembershipEvent - append-only log of join/leave events.
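A sketch of the two resources (field names and the derived-key idea are my assumptions):

interface ChatRoomMembership {            // at most one per (userId, chatRoomId)
  id: string;                             // e.g. "{chatRoomId}/{userId}" - a derived key enforces uniqueness
  userId: string;
  chatRoomId: string;
  joinedAt: timestamp;
}
interface ChatRoomMembershipEvent {       // append-only join/leave history
  id: string;
  userId: string;
  chatRoomId: string;
  kind: "join" | "leave";
  occurredAt: timestamp;
}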
When would you opt to use custom add and remove methods rather than an association resource to model a many-to-many relationship between two resources?
Use custom add and remove methods when the relationship carries no extra data and never needs to be addressed directly - that is, when you don't need to retrieve a single association (GET /memberships/{id}) or list associations as their own collection (LIST /memberships).
When associating Recipe resources with Ingredient resources, which is the managing resource and which the associated resource?
I'd choose Recipe as the managing resource and Ingredient as the associated resource.
What would the method be called to list the Ingredient resources that make up a specific recipe?
GET /recipes/{recipeId}/ingredients
When a duplicate resource is added using the custom add method, what should the result be?
Should return 409 Conflict error.
Note: Some real-world APIs instead return the original success response for duplicates. For example, Stripe returns the same result for subsequent requests with the same idempotency key, meaning a duplicate gets 2xx with the existing resource. The book's approach requires clients to treat 409 as "work already done."
Imagine you're creating an API that relies on URL bookmarks for a web browser that can be arranged into collections or folders. Does it make sense to put these two concepts (a folder and a bookmark) into a single polymorphic resource? Why or why not?
Yes, polymorphism makes sense here. Users need mixed enumeration - listing both bookmarks and folders together in the same response.
Note: This doesn't contradict modeling them as separate resources (see Exercise 4.1). They remain distinct resource types with different schemas, but the response uses a polymorphic wrapper when returning folder contents.
Why should we rely on a string field for storing polymorphic resource types instead of something like an enumeration?
Flexibility - new polymorphic types can be added without breaking existing clients.
Why should additional data be ignored (e.g., providing a radius for a Shape resource of type square) rather than rejected with an error?
In general, we should not validate irrelevant fields (for version compatibility). The same rule applies to polymorphic resources for consistent behaviour.
Why should we avoid polymorphic methods? Why is it a bad idea to have a polymorphic set of standard methods (e.g., UpdateResource())?
Each resource has its own operational requirements, which will diverge over time.
When copying a resource, should all child resources be copied as well? What about resources that reference the resource being duplicated?
Child Resources: Yes, must be copied together.
Referencing Resources: Depends on context. Some should be copied, some should merely reference the copy, and some should never be copied (e.g., Users). It's a design decision per relationship type.
How can we maintain referential integrity beyond the borders of our API? Should we make that guarantee?
We can't maintain it - we have no control over external systems.
We shouldn't guarantee it either. The internet is inherently "best effort" - 404s are normal. External referential integrity is not a goal for APIs.
When copying or moving data, how can we be sure that the resulting data is a true copy as intended and not a smear of data as it's being modified by others using the API?
Two options: freeze writes to the resource while copying (accepting brief unavailability), or copy from a point-in-time snapshot of the data.
If neither is feasible, accept the limitation: copied data may be smeared.
Imagine we're moving a resource from one parent to another, but the parents have different security and access control policies. Which policy should apply to the moved resource, the old or the new?
The new parent's policy. If the resource violates the new policy, reject the move - don't allow the violation or silently modify data to fit. Let the user resolve the conflict.
Why is it important for results of batch methods to be in a specific order? What happens if they're out of order?
To match the order of the request. Otherwise, we would need heuristics to match responses to requests (especially crucial for batch create, where the resources don't yet have client-known IDs).
In a batch update request, what should the response be if the parent field on the request doesn't match the parent field on one of the resources being updated?
Reject the entire batch.
Why is it important for batch requests to be atomic? What would the API definition look like if some requests could succeed while others could fail?
Two reasons: clients get a simple all-or-nothing contract instead of having to inspect every individual result, and the data is never left in a half-updated state.
The API definition for the partial-success case would look like
interface BatchUpdateResponse {
  results: Array<{
    resource?: Resource;
    error?: Error;
  }>;
}
and the client is required to check each result.
Note: AIP-233/AIP-235 (updated 2025) now recommend allowing partial success in many cases, diverging from the book's atomic-only stance.
Why does the batch delete method rely on the HTTP POST verb rather than the HTTP DELETE verb?
Because DELETE doesn't support a request body in a standardized way, and batch delete requires sending a list of IDs.
Why should the custom purge method be limited to only those cases where it's absolutely necessary?
Because it's dangerous. Criteria-based deletion can accidentally wipe large amounts of data. Unlike batch delete where you explicitly list IDs, a bad filter could match everything. Limiting availability reduces risk of catastrophic mistakes.
Why does the purge method default to executing validation only?
Because it's dangerous. User sees what would be deleted without actually deleting. Forces explicit opt-in to destructive action. Prevents accidental mass deletion from typos or bad filters.
What happens if the filter criteria are left empty?
The book's position: Allow empty filter for consistency with the standard list method (which returns all resources when filter is empty).
Note: AIP-165 marks the filter field as required, diverging from the book. AIP-165 also provides * as an explicit wildcard for "delete everything," making the intent clear rather than relying on an empty/missing value.
If a resource supports soft deletion, what is the expected behavior of the purge method on that resource? Should it soft-delete the resources? Or expunge them?
Be consistent with the standard DELETE operation. If it uses soft-delete (resp., hard-delete), use soft-delete (resp., hard-delete) in purge as well.
What is the purpose of returning the count of affected resources? What about the sample set of matching resources?
Both are safeguards for the validation step.
If we're worried about ingesting duplicate data via a write method, what's the best strategy to avoid this?
Use the client-generated ID (See Chapter 26).
Why should a write method return no response body? Why is it a bad idea for a write method to return an LRO resource?
If we want to communicate that data was received but hasn't yet been processed, what options are available? Which is likely to be the best choice for most APIs?
Return 202 Accepted (instead of 200 OK).
Why is it important to use a maximum page size rather than an exact page size?
Because guaranteeing an exact page size can be computationally expensive (imagine the data distributed across multiple storage backends).
What should happen if the page size field is left blank on a request? What if it's negative? What about zero? What about a gigantic number?
How does a user know that pagination is complete?
By the empty nextPageToken.
Is it reasonable for page tokens for some resource types to have different expirations than those from other resource types?
From the user's perspective it is confusing if one page token is still valid while another has already expired, so expirations should be kept consistent across resource types.
Why is it important that page tokens are completely opaque to users? What's a good mechanism to enforce this?
If clients can see the token structure, it becomes part of the API surface. Changing implementation would break clients.
A good mechanism is to encrypt the token's contents and Base64-encode the ciphertext.
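A minimal sketch of such opaque tokens (my own, using Node's crypto; key management is out of scope here):

import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

const KEY = randomBytes(32);   // in practice: a persistent, server-side managed key

function encodePageToken(cursor: object): string {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", KEY, iv);
  const body = Buffer.concat([cipher.update(JSON.stringify(cursor), "utf8"), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), body]).toString("base64url");
}

function decodePageToken(token: string): object {
  const buf = Buffer.from(token, "base64url");
  const decipher = createDecipheriv("aes-256-gcm", KEY, buf.subarray(0, 12));
  decipher.setAuthTag(buf.subarray(12, 28));
  const body = Buffer.concat([decipher.update(buf.subarray(28)), decipher.final()]);
  return JSON.parse(body.toString("utf8"));
}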
What is the primary drawback of using a structured interface for a filter condition?
Poor readability. A filter is essentially user-provided code, which can be complex and becomes verbose in a structured interface.
Why is it a bad idea to allow filtering based on the position of a value in an array field?
Position-based filtering assumes a strict, stable ordering of array elements, which might not hold in general.
How do you deal with map keys that may or may not exist? Should you be strict and throw errors when comparing against keys that don't exist? Or treat them as missing?
Be strict and throw errors. Seeing an error at this stage is much better than debugging downstream problems.
Imagine a user wants to filter resources based on a field's suffix (e.g., find all people with names ending in "man"). How might this be represented in a filter string?
Prepare a custom function endsWith and use it as endsWith(name, "man").
Note: although the book discourages supporting wildcard matching, AIP-160 supports it, so implementing wildcard matching and writing name = "*man" is also an option.
Why is it important to use two separate interfaces for configuring an import or export method? Why not combine these into a single interface?
Because they are two orthogonal concerns:
Serialization format (MessageOutputConfig): contentType, compression, filename template.
Storage location (DataDestination): where to read from or write to.
Additionally, the storage axis uses polymorphism (S3Destination, SambaDestination) rather than a flat interface with all options. A flat config causes confusion about which fields are required for which backend, and tempts reusing fields for different purposes.
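A sketch of that shape (interface and field names beyond those mentioned above are illustrative):

interface ExportMessagesRequest {
  parent: string;
  outputConfig: MessageOutputConfig;     // how to serialize
  dataDestination: DataDestination;      // where to write
}
interface MessageOutputConfig {
  contentType: string;                   // e.g. "application/json"
  compressionFormat?: string;            // e.g. "gzip"
  filenameTemplate?: string;
}
type DataDestination = S3Destination | SambaDestination;
interface S3Destination { type: "s3"; bucketId: string; objectPrefix?: string; }
interface SambaDestination { type: "samba"; hostname: string; path: string; }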
What options are available to avoid importing (or exporting) a smear of data? What is the best practice?
Options are: lock the data against writes during the transfer (downtime), read from a point-in-time snapshot, or accept the smear.
The best practice: use a snapshot if possible; otherwise, accept downtime or smear.
When a resource is exported, should the identifier be stored along with the data? What about when that same data is imported? Should new resources have the same IDs?
If an export operation fails, should the data that has been transferred so far be deleted? Why or why not?
No. The partially transferred data can still be useful for diagnosing the failure or resuming, and the destination isn't owned by the API, so cleanup is better left to the user.
If a service wanted to support importing and exporting, including child and other related resources, how might it go about doing so? How should identifiers be handled in this case?
This adds significant complexity, so it is strongly discouraged. Use backup/restore instead.
Would it be considered backward compatible to change a default value of a field?
No, it's not backward compatible: requests that omit the field would silently change behavior.
Imagine you have an API that caters to financial services with a large commercial bank as an important customer of the API. Many of your smaller customers want more frequent updates and new features while this large bank wants stability above all else. What can you do to balance these needs?
Run multiple versions, and set up an LTS (long-term support) version.
Right after launching an API, you notice a silly and embarrassing typo in a field name: port_number was accidentally called porn_number. You really want to change this quickly. What should you consider before deciding whether to make this change without changing the version number?
How long has the API been live? / How many clients are affected?
Can you support both field names temporarily?
Is the embarrassment worth the breaking change?
Imagine a scenario where your API, when a required field is missing, accidentally returns an empty response (200 OK in HTTP) rather than an error response (400 Bad Request), indicating that the required field is missing. Is this bug safe to fix in the same version or should it be addressed in a future version? If a future version, would this be considered a major, a minor, or a patch change according to semantic versioning (semver.org)?
It depends: if clients already rely on the buggy 200 response, fixing it in place is effectively a breaking change and belongs in a new major version; if nobody depends on it, it can be treated as a patch-level bug fix.
Pragmatic approach: assess actual client impact before deciding.
When does it make sense to use a Boolean flag versus a state field to represent that a resource has been soft deleted? Does it ever make sense to have both fields on the same resource? Why or why not?
Use a Boolean flag when deleted/not-deleted is the only lifecycle state you need; use a state field when the resource already has (or will likely gain) other states. Don't put both fields on the same resource because of the redundancy.
How should users indicate that they wish to see only the soft-deleted resources using the standard list method?
Use {"includeDeleted": true, "filter": "deleted=true"}
Note: AIP-132 uses show_deleted rather than includeDeleted for this field.
What should happen if the custom expunge method is called on a resource that hasn't yet been soft deleted? What about calling the custom undelete method on the same resource?
When does it make sense for soft-deleted resources to expire? How should the expiration deadline be indicated?
Use a purgeTime field, set when the resource is soft deleted based on policy (e.g., 30 days from deletion), and reset to null on undelete.
Note: AIP-148 uses purge_time for this purpose. The book's expireTime terminology was updated in AIP-148 (2023-07) to reduce confusion with other expiration concepts.
In what circumstances might we consider adding support for soft deletion to be a backward compatible change? What about a backward incompatible change?
If the resource previously had no DELETE method, adding soft delete is backward compatible. Otherwise, it is backward incompatible, since it changes the observable behavior for deleted resources.
Why is it a bad idea to use a fingerprint (e.g., a hash) of a request to determine whether it's a duplicate?
Because submitting two identical requests can be perfectly valid (e.g., creating two resources with the same properties).
Why would it be a bad idea to attempt to keep the request-response cache up-to-date? Should cached responses be invalidated or updated as the underlying resources change over time?
What would happen if the caching system responsible for checking duplicates were to go down? What is the failure scenario? How should this be protected against?
The duplication check fails. The potential outcomes are rejecting all operations (downtime) or accepting all operations (risking duplicates). Use a highly available cache to minimize the risk of this situation.
Why is it important to use a fingerprint of the request if you already have a request ID? What attribute of request ID generation leads to this requirement?
It lets us distinguish a genuine duplicate from a collision: request IDs are generated by clients and are therefore not guaranteed to be unique, so two different requests may arrive with the same ID.
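A sketch (the cache and function names are my own) of the check:

import { createHash } from "node:crypto";

const seen = new Map<string, { fingerprint: string; response: unknown }>();

function handleOnce(requestId: string, body: string, execute: () => unknown): unknown {
  const fingerprint = createHash("sha256").update(body).digest("hex");
  const hit = seen.get(requestId);
  if (hit && hit.fingerprint === fingerprint) return hit.response;               // duplicate: replay cached response
  if (hit) throw new Error("request ID collision: same ID, different request");  // collision: reject
  const response = execute();
  seen.set(requestId, { fingerprint, response });
  return response;
}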
Why should we rely on a flag to make a request "validate only" rather than a separate method that validates requests?
To keep the validation logic consistent with the actual method; a separate validation method would inevitably drift out of sync with it.
Imagine an API method that fetches data from a remote service. Should a validation request still communicate with the remote service? Why or why not?
If the external service supports a validation mode, use it. Otherwise, skip the call and do best-effort local validation, because we don't know whether the fetch in the external service is side-effect free.
Does it ever make sense to have support for validation requests on methods that never write any data?
Yes, e.g., verifying SQL syntax before running it.
Why is it important that the default value for the validateOnly flag is false? Does it ever make sense to invert the default value?
Otherwise, users must set validateOnly=false on every request for actual work.
What types of scenarios are a better fit for creating revisions implicitly? What about explicitly?
Why should revisions use random identifiers rather than incrementing numbers or timestamps?
Incrementing numbers: deletion leaves visible gaps (1, 2, 4) - leaks that something was deleted
Timestamps: collision risk with high concurrency
Why does restoration create a new revision? Why not reference a previous one?
Moving an old revision to the front would rewrite history and cause confusion.
What should happen if you attempt to restore a resource to a previous revision but the resource is already equivalent to that previous revision?
Still create new revision. Records the user's intent to restore, even if content unchanged.
Why do we use custom method notation for listing and deleting resources rather than the standard methods?
Note: The book's approach is pre-2023-09 design. AIP-162 (updated 2023-09) now treats revisions as a sub-collection, using GET /resources/{id}/revisions and DELETE /resources/{id}/revisions/{revisionId}.
Why isn't there a simple rule for deciding which failed requests can safely be retried?
Because for certain error codes (500, 502, 504) we don't know whether the server processed the request or not. Retryability depends on whether repeating the request is safe even if it already succeeded.
What is the underlying reason for relying on exponential back-off? What is the purpose for the random jitter between retries?
When does it make sense to use the Retry-After header?
When the server knows exactly when a retry will succeed - most commonly rate limiting (the server controls when the limit resets).
What is the difference between proving the origin of a request and preventing future repudiation? Why can't the former dictate the latter?
Proving the origin only convinces the verifier, and often the verifier itself could forge the origin. Non-repudiation is required to rule out both the possibility and the suspicion of a malicious verifier.
Which requirement isn't met by a shared secret between client and server?
Non-repudiation. The server (verifier) could itself forge requests using the shared secret, so it cannot prove to a third party who produced a given request.
Why is it important for the request fingerprint to include the HTTP method, host, and path attributes?
Because the method, host, and path are part of the request's meaning; leaving them out would let a signed payload be replayed against a different endpoint. This is particularly clear when the method has no body (e.g., DELETE).
Are the digital signatures laid out in this pattern susceptible to replay attacks? If so, how can this be addressed?
Yes, susceptible: a signed request can be captured and resent. To address replay attacks, we can include a timestamp and/or a nonce in the signed material, so the server can reject stale timestamps and previously seen nonces.
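A minimal sketch (the material layout and freshness window are my assumptions) of how that check might look:

import { randomUUID } from "node:crypto";

const seenNonces = new Set<string>();   // in practice: a store with a TTL matching the freshness window

function signedMaterial(method: string, path: string, timestampMs: number, nonce: string, body: string): string {
  // both sides build this string; the client signs it, the server verifies the signature over it
  return [method, path, String(timestampMs), nonce, body].join("\n");
}
// client side: const nonce = randomUUID(); const timestampMs = Date.now();

function acceptAfterSignatureCheck(timestampMs: number, nonce: string): boolean {
  if (Date.now() - timestampMs > 5 * 60 * 1000) return false;   // stale: reject
  if (seenNonces.has(nonce)) return false;                      // already seen: replay
  seenNonces.add(nonce);
  return true;
}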