
Creating a product feed for Channable (and other online commerce tools)

Wilco Menge, 02 November 2022

Your SAP Commerce implementation does not exist in a vacuum. To maximize the value you get out of your SAP Commerce Cloud system, it makes sense to interact with a number of online commerce tools, such as Google Shopping, Trustpilot, Amazon, etc.

There are a lot of solution types out there, such as marketplaces, advertising platforms, comparison platforms, etc. They all have one thing in common: they allow you to engage more with your customers and increase sales. Acorel can help you leverage these tools. In this blog I will showcase how to supply these tools with data in an efficient way.

To use these online tools, you often need to feed data such as product master data, stock data and prices into them. Most of them offer some kind of import solution where you upload JSON, XML or CSV files, or supply an HTTP(S) or (S)FTP endpoint. Because there are so many useful tools, building a separate interface for each of them quickly becomes complex and time consuming, so it makes sense to have a single, reusable solution in place.

Enter Channable

Of course, these challenges are not new or unique, so Channable created a solution that takes away much of the complexity of online commerce integrations: you feed your data into Channable, and Channable has out-of-the-box feeds to a very large number of online commerce tools.

You can create a free account at Channable to see how it fits your requirements. However, Channable does not support every online commerce tool yet; for example, Trustpilot (an online review tool) is not supported at the time of writing. This means that Channable may not be the only solution you need for this integration challenge.

Creating a feed for Channable (and other channels)

Normally I would suggest using the Integration API for this task, as it allows you to flexibly define a payload.

In our case, this is not always a suitable approach: most data loading processes of online commerce tools do not support features such as pagination, which the OData2WebServices module of the Integration API requires consumers to use.

The lowest common denominator seems to be: fetching a CSV file from an HTTP(S) endpoint. Almost all tools we have looked at during our research support this. An easy and flexible way of generating a CSV file is simply creating an ImpEx export job.

The target file name, the exported columns, and the query that selects the items to export can all be managed by just modifying the ImpEx file at runtime.

sample-export-script.impex:

"#% impex.setTargetFile( ""product.csv"" , true );"
INSERT_UPDATE Product;code[unique=true];catalogVersion(catalog(id),version)[unique=true];@price[site='yoursite',translator=com.customer.core.translators.PriceTranslator]
"#% impex.exportItemsFlexibleSearch(""SELECT {pk} FROM {Product} where {code} like 'abc%'"");"

create-job.impex:

INSERT_UPDATE Media; code[unique = true] ; realfilename ; @media[translator = de.hybris.platform.impex.jalo.media.MediaDataTranslator][forceWrite = true]
; sample-export-script.impex ; sample-export-script.impex ; $jarResourceCms/sample-export-script.impex

INSERT_UPDATE ImpExExportCronJob; code[unique = true]; job(code) ; sessionLanguage(isoCode)[default = en]; jobMedia(code)
; productFeedJob ; ImpEx-Export ; ; sample-export-script.impex

INSERT_UPDATE Trigger; cronjob(code)[unique = true]; cronExpression
; productFeedJob ; 0 0 0 * * ?
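
The Trigger above runs the job every night at midnight (cron expression 0 0 0 * * ?). While developing, you may want to kick off the export on demand instead; below is a minimal sketch using the platform's CronJobService. The helper class and its wiring are hypothetical, not part of the setup above:

[java]import de.hybris.platform.cronjob.model.CronJobModel;
import de.hybris.platform.servicelayer.cronjob.CronJobService;

// Hypothetical helper: run the export immediately, e.g. from a test.
public class ProductFeedRunner {

    private final CronJobService cronJobService;

    public ProductFeedRunner(final CronJobService cronJobService) {
        this.cronJobService = cronJobService;
    }

    public void runNow() {
        final CronJobModel job = cronJobService.getCronJob("productFeedJob");
        cronJobService.performCronJob(job, true); // 'true' waits until the job finishes
    }
}
[/java]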

In this case we need a translator, as we do not store prices in the database:

[java]import de.hybris.platform.impex.jalo.ImpExException;
import de.hybris.platform.impex.jalo.header.AbstractSpecialValueTranslator;
import de.hybris.platform.impex.jalo.header.HeaderValidationException;
import de.hybris.platform.impex.jalo.header.SpecialColumnDescriptor;
import de.hybris.platform.jalo.Item;

public class PriceTranslator extends AbstractSpecialValueTranslator {

    private String siteUid;

    @Override
    public void init(final SpecialColumnDescriptor columnDescriptor) throws HeaderValidationException {
        super.init(columnDescriptor);
        // Read the 'site' modifier from the ImpEx header, i.e. @price[site='yoursite',...]
        siteUid = columnDescriptor.getDescriptorData().getModifier("site");
    }

    @Override
    public String performExport(final Item item) throws ImpExException {
        // A rather interesting way of determining prices…
        return Double.toString((Math.random() * (100 - 10)) + 10);
    }
}
[/java]
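
In a real implementation you would of course look up actual prices. As a hedged sketch, here is what that could look like using the platform's PriceService; note that the price returned depends on the session context (currency, catalog version, etc.), which you would initialise based on the siteUid read from the header, and that the class name is hypothetical:

[java]import java.util.List;

import de.hybris.platform.core.Registry;
import de.hybris.platform.core.model.product.ProductModel;
import de.hybris.platform.impex.jalo.ImpExException;
import de.hybris.platform.impex.jalo.header.AbstractSpecialValueTranslator;
import de.hybris.platform.jalo.Item;
import de.hybris.platform.jalo.order.price.PriceInformation;
import de.hybris.platform.product.PriceService;
import de.hybris.platform.servicelayer.model.ModelService;

// Hypothetical variant of PriceTranslator that resolves real prices.
public class SessionPriceTranslator extends AbstractSpecialValueTranslator {

    @Override
    public String performExport(final Item item) throws ImpExException {
        // Translators are instantiated by the ImpEx engine itself,
        // so we fetch the beans we need from the registry.
        final ModelService modelService = Registry.getApplicationContext().getBean("modelService", ModelService.class);
        final PriceService priceService = Registry.getApplicationContext().getBean("priceService", PriceService.class);

        final ProductModel product = modelService.get(item);
        final List<PriceInformation> prices = priceService.getPriceInformationsForProduct(product);
        return prices.isEmpty() ? "" : Double.toString(prices.get(0).getPriceValue().getValue());
    }
}
[/java]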

Consuming the feed

Now that we have a job that creates the data for us, we still need a way to expose that data to Channable and other tools. As most tools support fetching data with an HTTP GET call, let us supply that through the OCC API. The implementation below looks up the CronJob named ‘productFeedJob’ and extracts the requested file from the ZIP file that is attached to the job.

[java]@RestController
@RequestMapping(value = "/{baseSiteId}/productfeed")
@ApiVersion("v2")
@Api(tags = "Customer Product Feed")
public class CustomerProductFeedController {

    public static final String CRON_JOB_CODE = "productFeedJob";

    @Resource
    private CronJobService cronJobService;

    @Resource
    private ProductFeedService productFeedService;

    @GetMapping("/{filename}")
    public void downloadFile(@PathVariable("filename") final String filename, final HttpServletResponse response) throws IOException {
        final ImpExExportCronJobModel cronJob = (ImpExExportCronJobModel) cronJobService.getCronJob(CRON_JOB_CODE);
        if (!productFeedService.jobContainsFile(cronJob, filename)) {
            response.setStatus(HttpServletResponse.SC_NOT_FOUND);
            return; // do not attempt to stream a file that is not part of the export
        }
        productFeedService.outputFile(cronJob, filename, response);
    }
}

public class ProductFeedService {

    private final MediaService mediaService;

    public ProductFeedService(final MediaService mediaService) {
        this.mediaService = mediaService;
    }

    public boolean jobContainsFile(final ImpExExportCronJobModel cronJob, final String filename) throws IOException {
        try (ZipInputStream zis = new ZipInputStream(mediaService.getStreamFromMedia(cronJob.getDataExportTarget()))) {
            ZipEntry entry;
            while ((entry = zis.getNextEntry()) != null) {
                if (entry.getName().equals(filename)) {
                    return true;
                }
            }
        }
        return false;
    }

    public void outputFile(final ImpExExportCronJobModel cronJob, final String filename, final HttpServletResponse response) throws IOException {
        try (ZipInputStream zis = new ZipInputStream(mediaService.getStreamFromMedia(cronJob.getDataExportTarget()))) {
            ZipEntry entry;
            while ((entry = zis.getNextEntry()) != null) {
                if (entry.getName().equals(filename)) {
                    response.setContentType("text/csv; charset=utf-8");
                    response.setStatus(HttpServletResponse.SC_OK);

                    int data = zis.read();
                    while (data != -1) {
                        response.getOutputStream().write(data);
                        data = zis.read();
                    }
                    return;
                }
            }
        }
    }
}[/java]
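
With this in place, Channable (or a browser, for testing) can fetch the feed from a URL along the lines of https://yourhost/occ/v2/yoursite/productfeed/product.csv; the exact prefix depends on how your OCC web application is mounted. Note that ProductFeedService must also be registered as a Spring bean in the web application context for the @Resource injection in the controller to resolve.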

Configuring in Channable

In Channable, we need to set up an import (feed) that points to the URL of the endpoint we created above, so that Channable can periodically fetch the CSV file and map its columns to Channable's product attributes.

Conclusion

That’s it. We can now see data entering the Channable system.

From there on, if there is a Channable feed for your online commerce tool in place, you can just activate that feed and let data stream from Channable to your online commerce tool. If Channable does not support your tool, you can just add another ImpEx statement to the export file and continue from there, as sketched below.
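
For example, a second file with stock information could be added to the same export script. A sketch, assuming the standard StockLevel type; verify the attributes against your own data model:

#% impex.setTargetFile( "stock.csv" , true );
INSERT_UPDATE StockLevel;productCode[unique=true];warehouse(code)[unique=true];available
#% impex.exportItemsFlexibleSearch("SELECT {pk} FROM {StockLevel}");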

If you have any questions, don’t hesitate to get in touch!
