gRPC: When microservices finally communicate efficiently
On my last project I chose gRPC as one of the main communication channels between microservices, and I have to say I've grown quite fond of it.
Over the years I've been through several solutions, from the Netflix Feign Client through plain REST APIs to message queues. But I have to admit: gRPC's speed is excellent, and once you learn it, it's easy to reuse across services.
I currently use it between Spring Boot, Go, and Python microservices, and it works beautifully.
The most interesting part? I replaced a GraphQL Federation Gateway with an approach where a GraphQL gateway talks to the microservices over gRPC. Why? More on that in a moment.
Evolution: From SOAP to gRPC
Let's walk through how communication between services has evolved.
2005-2010: The SOAP era
<!-- SOAP Request - 2KB payload -->
<soapenv:Envelope>
  <soapenv:Header/>
  <soapenv:Body>
    <getUserRequest>
      <userId>12345</userId>
    </getUserRequest>
  </soapenv:Body>
</soapenv:Envelope>
Problems:
- Huge XML overhead
- Slow parsing
- Complicated WSDL
- A pain to debug
Pros:
- ✅ Strong typing
- ✅ Contract-first approach
- ✅ Enterprise support
Cons:
- ❌ Large payloads
- ❌ Slow
- ❌ Over-engineered
2010-2018: The REST + JSON era
// REST Request - 0.3KB payload
GET /api/users/12345
Accept: application/json

{
  "userId": "12345",
  "name": "John Doe",
  "email": "john@example.com"
}
Improvements:
- Lightweight payloads (JSON)
- HTTP standards
- Broad support
- Easy debugging
Pros:
- ✅ Simple
- ✅ Human-readable
- ✅ HTTP caching
- ✅ Broad support
Cons:
- ❌ No strong typing
- ❌ Manual API contracts
- ❌ Slower than binary protocols
- ❌ HTTP/1.1 limitations
2015-2020: The Netflix Feign Client era
// Feign Client - Declarative REST
@FeignClient(name = "user-service")
public interface UserClient {
    @GetMapping("/api/users/{id}")
    UserDTO getUser(@PathVariable("id") Long id);
}

// Usage
UserDTO user = userClient.getUser(12345L);
Improvements:
- Declarative API
- Type-safe
- Automatic serialization
- Circuit breaker integration
Pros:
- ✅ Type-safe REST
- ✅ Simple integration
- ✅ Resilience4j support
- ✅ Load balancing
Cons:
- ❌ Still REST (slow)
- ❌ Still HTTP/1.1
- ❌ Still JSON overhead
- ❌ Reflection overhead
2020-present: The gRPC era
// Protocol Buffer Definition - 0.1KB binary payload
service UserService {
  rpc GetUser(GetUserRequest) returns (User);
}

message GetUserRequest {
  int64 user_id = 1;
}

message User {
  int64 user_id = 1;
  string name = 2;
  string email = 3;
}
Improvements:
- Binary protocol (compact)
- HTTP/2 (multiplexing)
- Bidirectional streaming
- Code generation
- Strong typing
Pros:
- ✅ 7-10× faster than REST
- ✅ Strong typing out of the box
- ✅ HTTP/2 multiplexing
- ✅ Bidirectional streaming
- ✅ Code generation
- ✅ Multi-language support
Cons:
- ❌ Not human-readable
- ❌ Steeper learning curve
- ❌ Limited browser support
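The "compact binary" claim is easy to sanity-check. The sketch below is an illustration only, not the real protobuf wire format: it compares JSON with a naive length-prefixed binary encoding of the User message above. Actual Protocol Buffers output (varints, field tags) is typically even smaller.

```python
import json
import struct

# The User message from the definition above, as a plain dict.
user = {"user_id": 12345, "name": "John Doe", "email": "john@example.com"}

json_bytes = json.dumps(user).encode("utf-8")

def encode_binary(u: dict) -> bytes:
    """Naive binary encoding: fixed 8-byte int64 + length-prefixed UTF-8 strings."""
    out = struct.pack("<q", u["user_id"])           # int64, little-endian
    for field in ("name", "email"):
        data = u[field].encode("utf-8")
        out += struct.pack("<B", len(data)) + data  # 1-byte length prefix + bytes
    return out

binary_bytes = encode_binary(user)
print(f"JSON: {len(json_bytes)} bytes, binary: {len(binary_bytes)} bytes")
```

Even this crude encoding roughly halves the payload; that, plus HTTP/2 framing, is where the bandwidth numbers later in this article come from.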
Why did I choose gRPC?
Scenario: an enterprise microservice architecture
Requirements:
1. High throughput (10K+ requests/sec per service)
2. Low latency (<50ms)
3. Strong typing between services
4. Multi-language support (Java, Go, Python)
5. Bidirectional streaming
6. Automatic code generation
Alternatives I considered:
Option 1: REST + Feign Client
Pros:
+ Familiar
+ Simple integration
+ HTTP caching
Cons:
- Insufficient performance
- JSON overhead
- Manual API contracts
- HTTP/1.1 limitations
Verdict: ❌ Won't meet the performance requirements
Option 2: Message Queue (RabbitMQ/Kafka)
Pros:
+ Asynchronous
+ Decoupling
+ High throughput
Cons:
- Awkward for request-response
- Latency overhead (message broker)
- No strong typing
- Overkill for simple calls
Verdict: ✅ Good for async, ❌ bad for sync calls
Option 3: gRPC
Pros:
+ 7-10× faster than REST
+ Strong typing
+ Code generation
+ HTTP/2 multiplexing
+ Bidirectional streaming
+ Multi-language support
Cons:
- Learning curve
- Binary format (harder to debug)
Verdict: ✅ Perfect fit!
Real-world implementation
Let me show you what a gRPC implementation looks like across languages.
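Everything that follows hangs off a single shared contract, with per-language stubs generated from it. As a sketch (file name and output paths are assumptions), the Python stubs used later can be produced with grpcio-tools; the block is guarded so it only invokes protoc when the tool and the proto file are actually present:

```python
import importlib.util
import os

# protoc arguments: search path, message output, gRPC stub output, input file.
# Produces user_service_pb2.py (messages) and user_service_pb2_grpc.py (stubs).
args = [
    "grpc_tools.protoc",
    "-I.",
    "--python_out=.",
    "--grpc_python_out=.",
    "user_service.proto",
]

# Guarded so this sketch is safe to run anywhere: generation only happens
# when grpcio-tools is installed and user_service.proto exists locally.
if importlib.util.find_spec("grpc_tools") and os.path.exists("user_service.proto"):
    from grpc_tools import protoc
    protoc.main(args)
```

For Java, the same generation typically runs at build time via a protobuf build plugin; for Go, the protoc-gen-go and protoc-gen-go-grpc plugins play the same role.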
1. Protocol Buffer Definition (shared)
// user_service.proto
syntax = "proto3";

package com.qaron.user;

option java_package = "com.qaron.user.grpc";
option java_multiple_files = true;
option go_package = "github.com/qaron/user/grpc";

service UserService {
  // Unary call
  rpc GetUser(GetUserRequest) returns (UserResponse);
  // Server streaming
  rpc ListUsers(ListUsersRequest) returns (stream UserResponse);
  // Client streaming
  rpc BulkCreateUsers(stream CreateUserRequest) returns (BulkCreateResponse);
  // Bidirectional streaming
  rpc ChatWithUsers(stream ChatMessage) returns (stream ChatMessage);
}

message GetUserRequest {
  int64 user_id = 1;
}

message UserResponse {
  int64 user_id = 1;
  string name = 2;
  string email = 3;
  string role = 4;
  int64 created_at = 5;
}

message ListUsersRequest {
  int32 page = 1;
  int32 page_size = 2;
  string role_filter = 3;
}

message CreateUserRequest {
  string name = 1;
  string email = 2;
  string role = 3;
}

message BulkCreateResponse {
  int32 created_count = 1;
  repeated int64 user_ids = 2;
}

message ChatMessage {
  int64 user_id = 1;
  string message = 2;
  int64 timestamp = 3;
}
2. Spring Boot Server (Java)
// 1. Dependencies (build.gradle)
dependencies {
    implementation 'net.devh:grpc-spring-boot-starter:2.15.0.RELEASE'
    implementation 'io.grpc:grpc-protobuf:1.60.0'
    implementation 'io.grpc:grpc-stub:1.60.0'
}

// 2. Service Implementation
@GrpcService
@Slf4j
public class UserGrpcService extends UserServiceGrpc.UserServiceImplBase {

    private final UserRepository userRepository;

    public UserGrpcService(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    // Unary call
    @Override
    public void getUser(GetUserRequest request,
                        StreamObserver<UserResponse> responseObserver) {
        log.info("GetUser request: userId={}", request.getUserId());
        userRepository.findById(request.getUserId())
            .map(this::mapToResponse)
            .ifPresentOrElse(
                user -> {
                    responseObserver.onNext(user);
                    responseObserver.onCompleted();
                },
                () -> responseObserver.onError(
                    Status.NOT_FOUND
                        .withDescription("User not found")
                        .asRuntimeException()
                )
            );
    }

    // Server streaming
    @Override
    public void listUsers(ListUsersRequest request,
                          StreamObserver<UserResponse> responseObserver) {
        log.info("ListUsers request: page={}", request.getPage());
        userRepository.findAllByRole(
                request.getRoleFilter(),
                PageRequest.of(request.getPage(), request.getPageSize())
            ).stream()
            .map(this::mapToResponse)
            .forEach(responseObserver::onNext);
        responseObserver.onCompleted();
    }

    // Client streaming
    @Override
    public StreamObserver<CreateUserRequest> bulkCreateUsers(
            StreamObserver<BulkCreateResponse> responseObserver) {
        return new StreamObserver<CreateUserRequest>() {
            private final List<Long> createdIds = new ArrayList<>();

            @Override
            public void onNext(CreateUserRequest request) {
                User user = User.builder()
                    .name(request.getName())
                    .email(request.getEmail())
                    .role(request.getRole())
                    .build();
                User saved = userRepository.save(user);
                createdIds.add(saved.getId());
            }

            @Override
            public void onError(Throwable t) {
                log.error("Error in bulk create", t);
            }

            @Override
            public void onCompleted() {
                BulkCreateResponse response = BulkCreateResponse.newBuilder()
                    .setCreatedCount(createdIds.size())
                    .addAllUserIds(createdIds)
                    .build();
                responseObserver.onNext(response);
                responseObserver.onCompleted();
            }
        };
    }

    private UserResponse mapToResponse(User user) {
        return UserResponse.newBuilder()
            .setUserId(user.getId())
            .setName(user.getName())
            .setEmail(user.getEmail())
            .setRole(user.getRole())
            .setCreatedAt(user.getCreatedAt().toEpochMilli())
            .build();
    }
}

// 3. Configuration
@Configuration
public class GrpcConfig {
    @Bean
    public GrpcServerBuilderConfigurer grpcServerBuilderConfigurer() {
        return serverBuilder -> {
            serverBuilder
                .maxInboundMessageSize(10 * 1024 * 1024) // 10MB
                .keepAliveTime(5, TimeUnit.MINUTES)
                .permitKeepAliveWithoutCalls(true);
        };
    }
}

// application.yml
grpc:
  server:
    port: 9090
    address: 0.0.0.0
3. Go Client
// client.go
package main

import (
    "context"
    "io"
    "log"
    "time"

    pb "github.com/qaron/user/grpc"
    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
)

type UserClient struct {
    client pb.UserServiceClient
    conn   *grpc.ClientConn
}

func NewUserClient(address string) (*UserClient, error) {
    conn, err := grpc.Dial(
        address,
        grpc.WithTransportCredentials(insecure.NewCredentials()),
        grpc.WithDefaultCallOptions(
            grpc.MaxCallRecvMsgSize(10*1024*1024), // 10MB
            grpc.MaxCallSendMsgSize(10*1024*1024),
        ),
    )
    if err != nil {
        return nil, err
    }
    return &UserClient{
        client: pb.NewUserServiceClient(conn),
        conn:   conn,
    }, nil
}

func (c *UserClient) Close() error {
    return c.conn.Close()
}

// Unary call
func (c *UserClient) GetUser(ctx context.Context, userID int64) (*pb.UserResponse, error) {
    ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
    defer cancel()
    req := &pb.GetUserRequest{
        UserId: userID,
    }
    return c.client.GetUser(ctx, req)
}

// Server streaming
func (c *UserClient) ListUsers(ctx context.Context, page int32, pageSize int32) ([]*pb.UserResponse, error) {
    req := &pb.ListUsersRequest{
        Page:     page,
        PageSize: pageSize,
    }
    stream, err := c.client.ListUsers(ctx, req)
    if err != nil {
        return nil, err
    }
    var users []*pb.UserResponse
    for {
        user, err := stream.Recv()
        if err == io.EOF {
            break
        }
        if err != nil {
            return nil, err
        }
        users = append(users, user)
    }
    return users, nil
}

// Usage
func main() {
    client, err := NewUserClient("localhost:9090")
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // Get a single user
    user, err := client.GetUser(context.Background(), 12345)
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("User: %+v", user)

    // List users (streaming)
    users, err := client.ListUsers(context.Background(), 0, 10)
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("Found %d users", len(users))
}
4. Python Service
# user_service_server.py
import grpc
from concurrent import futures
import logging

import user_service_pb2
import user_service_pb2_grpc

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


class UserService(user_service_pb2_grpc.UserServiceServicer):
    def __init__(self, user_repository):
        self.user_repository = user_repository

    # Unary call
    def GetUser(self, request, context):
        logger.info(f"GetUser request: userId={request.user_id}")
        user = self.user_repository.find_by_id(request.user_id)
        if not user:
            context.set_code(grpc.StatusCode.NOT_FOUND)
            context.set_details("User not found")
            return user_service_pb2.UserResponse()
        return user_service_pb2.UserResponse(
            user_id=user.id,
            name=user.name,
            email=user.email,
            role=user.role,
            # Epoch millis, matching the Java service's toEpochMilli()
            created_at=int(user.created_at.timestamp() * 1000)
        )

    # Server streaming
    def ListUsers(self, request, context):
        logger.info(f"ListUsers request: page={request.page}")
        users = self.user_repository.find_all_by_role(
            role=request.role_filter,
            page=request.page,
            page_size=request.page_size
        )
        for user in users:
            yield user_service_pb2.UserResponse(
                user_id=user.id,
                name=user.name,
                email=user.email,
                role=user.role,
                created_at=int(user.created_at.timestamp() * 1000)
            )

    # Client streaming
    def BulkCreateUsers(self, request_iterator, context):
        created_ids = []
        for request in request_iterator:
            user = self.user_repository.create(
                name=request.name,
                email=request.email,
                role=request.role
            )
            created_ids.append(user.id)
        return user_service_pb2.BulkCreateResponse(
            created_count=len(created_ids),
            user_ids=created_ids
        )


def serve(user_repository):
    server = grpc.server(
        futures.ThreadPoolExecutor(max_workers=10),
        options=[
            ('grpc.max_send_message_length', 10 * 1024 * 1024),
            ('grpc.max_receive_message_length', 10 * 1024 * 1024),
        ]
    )
    user_service_pb2_grpc.add_UserServiceServicer_to_server(
        UserService(user_repository),
        server
    )
    server.add_insecure_port('[::]:9090')
    server.start()
    logger.info("gRPC server started on port 9090")
    server.wait_for_termination()


if __name__ == '__main__':
    serve(UserRepository())  # inject your repository implementation here
GraphQL Gateway + gRPC: The best of both worlds
Here comes the game changer. I replaced the GraphQL Federation Gateway with this approach:
Before (Federation):
Browser/Mobile App
  ↓ GraphQL query
GraphQL Federation Gateway
  ↓ REST/HTTP (slow)
Microservice A, B, C
Problems:
- REST overhead between the gateway and services
- JSON serialization/deserialization
- HTTP/1.1 limitations
- Slow performance
Now (GraphQL GW + gRPC):
Browser/Mobile App
  ↓ GraphQL query (HTTP/JSON - user friendly)
GraphQL Gateway
  ↓ gRPC (fast, binary)
Microservice A, B, C
Benefits:
+ User-friendly GraphQL on the frontend
+ Ultra-fast gRPC on the backend
+ Best of both worlds
Implementing the GraphQL Gateway
// GraphQL Gateway (Node.js + Apollo)
import { ApolloServer, gql } from 'apollo-server';
import * as grpc from '@grpc/grpc-js';
import * as protoLoader from '@grpc/proto-loader';

// Load gRPC clients
const userProto = protoLoader.loadSync('user_service.proto');
const userService = grpc.loadPackageDefinition(userProto).com.qaron.user;

const userClient = new userService.UserService(
  'localhost:9090',
  grpc.credentials.createInsecure()
);

// GraphQL Schema
const typeDefs = gql`
  type User {
    id: ID!
    name: String!
    email: String!
    role: String!
    createdAt: String!
  }

  type Query {
    user(id: ID!): User
    users(page: Int, pageSize: Int, roleFilter: String): [User!]!
  }

  type Mutation {
    createUser(name: String!, email: String!, role: String!): User!
  }
`;

// Resolvers (GraphQL → gRPC)
const resolvers = {
  Query: {
    user: async (_, { id }) => {
      return new Promise((resolve, reject) => {
        userClient.GetUser({ user_id: parseInt(id) }, (err, response) => {
          if (err) reject(err);
          else resolve({
            id: response.user_id,
            name: response.name,
            email: response.email,
            role: response.role,
            createdAt: new Date(response.created_at).toISOString()
          });
        });
      });
    },
    users: async (_, { page = 0, pageSize = 10, roleFilter }) => {
      return new Promise((resolve, reject) => {
        const users = [];
        const stream = userClient.ListUsers({
          page,
          page_size: pageSize,
          role_filter: roleFilter
        });
        stream.on('data', (user) => {
          users.push({
            id: user.user_id,
            name: user.name,
            email: user.email,
            role: user.role,
            createdAt: new Date(user.created_at).toISOString()
          });
        });
        stream.on('end', () => resolve(users));
        stream.on('error', (err) => reject(err));
      });
    }
  }
};

const server = new ApolloServer({ typeDefs, resolvers });
server.listen().then(({ url }) => {
  console.log(`GraphQL Gateway ready at ${url}`);
});
Why does this pattern work?
Frontend (GraphQL):
# User-friendly, self-documenting, flexible
query GetUserWithPosts {
  user(id: "12345") {
    name
    email
    role
  }
}
Pros:
+ Declarative data fetching
+ Single endpoint
+ No over-fetching
+ Strongly typed
+ Great developer experience
Backend (gRPC):
GraphQL GW ─(gRPC)─> User Service
           ─(gRPC)─> Post Service
           ─(gRPC)─> Comment Service
Pros:
+ 7-10× faster than REST
+ Binary protocol
+ HTTP/2 multiplexing
+ Strong typing
+ Code generation
Result:
- The frontend gets user-friendly GraphQL
- The backend communicates over ultra-fast gRPC
- Best of both worlds!
Performance benchmarks: the numbers don't lie
I ran my own benchmarks on our infrastructure.
Test setup
Microservices:
- User Service (Spring Boot + gRPC)
- Post Service (Go + gRPC)
- Comment Service (Python + gRPC)
Load:
- 1,000 concurrent requests
- Payload: 1KB - 100KB
- Duration: 5 minutes
- Tools: Apache Benchmark + custom scripts
Results: Small payload (1KB)
| Metric | REST + JSON | gRPC | Improvement |
|---|---|---|---|
| Requests/sec | 1,234 | 8,732 | 7.1× |
| Avg latency | 81ms | 11ms | 7.4× faster |
| P95 latency | 157ms | 23ms | 6.8× faster |
| P99 latency | 312ms | 45ms | 6.9× faster |
| Bandwidth | 1.2 MB/s | 0.4 MB/s | 3× less |
Results: Large payload (100KB)
| Metric | REST + JSON | gRPC | Improvement |
|---|---|---|---|
| Requests/sec | 87 | 1,245 | 14.3× |
| Avg latency | 1,147ms | 80ms | 14.3× faster |
| P95 latency | 2,341ms | 134ms | 17.5× faster |
| P99 latency | 4,512ms | 189ms | 23.9× faster |
| Bandwidth | 8.7 MB/s | 2.1 MB/s | 4.1× less |
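The degradation and tail-latency figures in the observations that follow can be recomputed directly from the two tables above; a quick sanity check of my own numbers:

```python
# Throughput drop as the payload grows from 1KB to 100KB (values from the tables).
rest_drop = 1 - 87 / 1234     # REST: 1,234 req/s -> 87 req/s
grpc_drop = 1 - 1245 / 8732   # gRPC: 8,732 req/s -> 1,245 req/s
print(f"REST drop: {rest_drop:.0%}, gRPC drop: {grpc_drop:.0%}")

# Tail-latency ratio (P99 / average) for the 1KB payload.
rest_tail = 312 / 81
grpc_tail = 45 / 11
print(f"REST P99/avg: {rest_tail:.2f}x, gRPC P99/avg: {grpc_tail:.2f}x")
```

Note that the P99/avg ratios come out similar for both protocols; gRPC's advantage is in the absolute numbers, not in tail shape.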
Key observations
1. gRPC dominates with large payloads
REST degradation: 87 req/s (100KB) vs 1,234 req/s (1KB) = 93% drop
gRPC degradation: 1,245 req/s (100KB) vs 8,732 req/s (1KB) = 86% drop
Conclusion: gRPC holds up better as payload size scales
2. Predictable latency
REST P99/avg ratio: 312ms / 81ms = 3.85×
gRPC P99/avg ratio: 45ms / 11ms = 4.09×
Conclusion: the tail-to-average ratios are similar for both, but gRPC's absolute latencies stay far lower across the board
3. Bandwidth savings
Small payload: 3× less bandwidth
Large payload: 4.1× less bandwidth
Reason: Protocol Buffers are far more compact than JSON
Comparisons with other solutions
gRPC vs REST
| Criterion | REST | gRPC | Winner |
|---|---|---|---|
| Performance | Slow (JSON + HTTP/1.1) | Fast (Protobuf + HTTP/2) | 🏆 gRPC |
| Payload size | Large (JSON) | Small (binary) | 🏆 gRPC |
| Strong typing | No (manual) | Yes (codegen) | 🏆 gRPC |
| Streaming | No | Yes (bi-directional) | 🏆 gRPC |
| Browser support | Excellent | Limited | 🏆 REST |
| Debugging | Easy (human-readable) | Harder (binary) | 🏆 REST |
| Learning curve | Easy | Steep | 🏆 REST |
| Caching | HTTP caching | No built-in | 🏆 REST |
Use REST when:
- Public APIs (browser access)
- Simple CRUD operations
- Human-readable responses needed
- HTTP caching is critical
Use gRPC when:
- Internal microservice communication
- High performance critical
- Strong typing needed
- Streaming required
gRPC vs Message Queues (RabbitMQ/Kafka)
| Criterion | Message Queue | gRPC | Winner |
|---|---|---|---|
| Latency | High (broker overhead) | Low (direct) | 🏆 gRPC |
| Throughput | Very high | High | 🏆 MQ |
| Decoupling | Excellent | None | 🏆 MQ |
| Request-response | Complex | Natural | 🏆 gRPC |
| Async | Native | Possible | 🏆 MQ |
| Ordering | Guaranteed | No | 🏆 MQ |
| Durability | Yes | No | 🏆 MQ |
Use Message Queue when:
- Asynchronous communication
- Event-driven architecture
- Decoupling services
- Message durability needed
Use gRPC when:
- Synchronous request-response
- Low latency critical
- Direct service-to-service calls
gRPC vs GraphQL Federation
| Criterion | GraphQL Federation | GraphQL GW + gRPC | Winner |
|---|---|---|---|
| Client experience | Excellent | Excellent | 🤝 Tie |
| Backend performance | Slow (REST) | Fast (gRPC) | 🏆 GW+gRPC |
| Complexity | High (federation) | Medium | 🏆 GW+gRPC |
| Type safety | GraphQL only | GraphQL + Protobuf | 🏆 GW+gRPC |
| Streaming | Subscriptions | gRPC streams | 🤝 Tie |
Verdict: a GraphQL gateway + gRPC backend = the best of both worlds
Best practices from the field
1. Always define timeouts
// ❌ BAD: no timeout
ManagedChannel channel = ManagedChannelBuilder
    .forAddress("localhost", 9090)
    .usePlaintext()
    .build();

// ✅ GOOD: with keepalive and a per-call deadline
ManagedChannel channel = ManagedChannelBuilder
    .forAddress("localhost", 9090)
    .usePlaintext()
    .keepAliveTime(5, TimeUnit.MINUTES)
    .keepAliveTimeout(10, TimeUnit.SECONDS)
    .build();

UserServiceGrpc.UserServiceBlockingStub stub = UserServiceGrpc
    .newBlockingStub(channel)
    .withDeadlineAfter(5, TimeUnit.SECONDS);
2. Use connection pooling
// ❌ BAD: a new connection for every request
func callService() {
    conn, _ := grpc.Dial("localhost:9090", grpc.WithTransportCredentials(insecure.NewCredentials()))
    defer conn.Close()
    client := pb.NewUserServiceClient(conn)
    // ...
}

// ✅ GOOD: reuse the connection
var (
    conn   *grpc.ClientConn
    client pb.UserServiceClient
)

func init() {
    var err error
    conn, err = grpc.Dial(
        "localhost:9090",
        grpc.WithTransportCredentials(insecure.NewCredentials()),
        grpc.WithDefaultCallOptions(
            grpc.MaxCallRecvMsgSize(10*1024*1024),
        ),
    )
    if err != nil {
        log.Fatal(err)
    }
    client = pb.NewUserServiceClient(conn)
}
3. Use streaming for large datasets
// ❌ BAD: a unary call for 10K records
List<User> users = userService.getAllUsers();
// Memory explosion + timeout risk

// ✅ GOOD: server streaming
userService.streamAllUsers(request, new StreamObserver<User>() {
    @Override
    public void onNext(User user) {
        processUser(user); // Process one by one
    }

    @Override
    public void onCompleted() {
        log.info("Streaming completed");
    }
});
4. Error handling
# ✅ GOOD: proper error handling
def get_user(user_id):
    try:
        response = user_stub.GetUser(
            user_pb2.GetUserRequest(user_id=user_id),
            timeout=5
        )
        return response
    except grpc.RpcError as e:
        if e.code() == grpc.StatusCode.NOT_FOUND:
            logger.warning(f"User {user_id} not found")
            return None
        elif e.code() == grpc.StatusCode.DEADLINE_EXCEEDED:
            logger.error("Request timeout")
            raise TimeoutError()
        else:
            logger.error(f"gRPC error: {e.code()} - {e.details()}")
            raise
5. Monitoring and observability
// Interceptor for logging and metrics
public class MonitoringInterceptor implements ServerInterceptor {

    private final MeterRegistry meterRegistry;

    public MonitoringInterceptor(MeterRegistry meterRegistry) {
        this.meterRegistry = meterRegistry;
    }

    @Override
    public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(
            ServerCall<ReqT, RespT> call,
            Metadata headers,
            ServerCallHandler<ReqT, RespT> next) {
        String method = call.getMethodDescriptor().getFullMethodName();
        Timer.Sample sample = Timer.start(meterRegistry);

        return new ForwardingServerCallListener.SimpleForwardingServerCallListener<>(
                next.startCall(call, headers)
        ) {
            @Override
            public void onComplete() {
                sample.stop(meterRegistry.timer("grpc.server.calls",
                    "method", method,
                    "status", "OK"
                ));
                super.onComplete();
            }

            @Override
            public void onCancel() {
                sample.stop(meterRegistry.timer("grpc.server.calls",
                    "method", method,
                    "status", "CANCELLED"
                ));
                super.onCancel();
            }
        };
    }
}
When NOT to use gRPC
Let's be realistic: gRPC is not a silver bullet.
1. Public APIs for browsers
Problem: gRPC requires HTTP/2 + a binary protocol
Browsers: limited support
Solution: REST or GraphQL for public APIs
2. Simple CRUD applications
Problem: gRPC is overhead for simple operations
Over-engineering
Solution: REST is sufficient
3. Debugging in production
Problem: binary format
Logs are hard to read
Solution: REST for easier debugging,
or invest in extensive logging/monitoring
4. Frequently changing APIs
Problem: Protobuf schema changes
Backward-compatibility issues
Solution: REST with JSON (more flexible)
Conclusion: gRPC is the future of internal communication
After a year of working with gRPC, I can say:
The benefits that won us over:
- 🚀 7-10× faster than REST
- 💪 Strong typing out of the box
- 🔄 Bidirectional streaming
- 🌐 Multi-language support (Java, Go, Python, ...)
- 📦 Small payloads (binary)
- 🎯 Code generation (DRY principle)
When we use it:
- Internal microservice communication
- High-performance requirements
- Multi-language environments
- Streaming needs
When we DON'T use it:
- Public APIs (REST/GraphQL is a better fit)
- Browser-facing endpoints
- Simple CRUD apps
The best pattern:
Browser/Mobile
  ↓ GraphQL/REST (user-friendly)
API Gateway
  ↓ gRPC (ultra-fast)
Microservices
The question isn't "Should I use gRPC?"
The question is "Why are you still using REST between your microservices?"
Written by a Solution Architect with 10+ years of experience who has implemented gRPC in enterprise projects across Spring Boot, Go, and Python microservices. We currently handle 10K+ gRPC requests per second with an average latency of 15ms.
Stack: Spring Boot + gRPC, Go + gRPC, Python + gRPC, GraphQL Gateway, Protocol Buffers, Kubernetes, Istio service mesh.
Start with gRPC today. Your microservices will thank you.