MrObama2022 wrote: 2025-07-21 15:32:33
Ok, I just finished. This works on my PC.
BUT PLEASE DON'T USE IT. THIS IS ONLY FOR STUDY: LOOK AT THE CODE, BUT DON'T USE IT. I STILL HAVE A LOT OF TESTING TO DO. REPEAT: DON'T USE IT, AND WAIT. THIS IS A HIGHLY EXPERIMENTAL VERSION.
ExternalCurlHandler.pas
FilmAffinity (ES).ifs unofficial 5.0 alpha 1
Here's how it works. All the work is handled by an external library, `ExternalCurlHandler.pas`.
FilmAffinity (ES).ifs is based on the old version of the script and works as usual. Very little has changed: it now uses `ExternalCurlHandler.pas` instead of `StringUtils7552` (which is in turn called by `ExternalCurlHandler.pas`); it uses `GetPage5Advanced()` instead of `GetPage()`; and on startup it checks whether the PC setup is okay (`if (not setupScript()) then exit;`).
So far, everything is very simple. This approach allows any script to make use of `ExternalCurlHandler.pas` without heavy changes: you just reference the unit in place of `StringUtils7552`, replace `GetPage()` with `GetPage5Advanced()`, and (optionally) add the initial check.
The library does one simple thing: through the `setupScript()` function, it creates a subfolder in which it places a single batch file. If your catalog is located at `D:\catalogs\movies.amc`, the subfolder will be `D:\catalogs\movies_curlscript\`, and inside it the created file will be called `setup.bat`. This file needs to be run only once (I still have to make changes to handle more catalogs!).
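To make the naming scheme concrete, here is a minimal Python sketch of how the helper folder and `setup.bat` path can be derived from the catalog path. The `_curlscript` suffix and the file names come from the description above; the helper function names are my own:

```python
from pathlib import PureWindowsPath

def curl_script_dir(catalog_path: str) -> PureWindowsPath:
    """Derive the helper folder next to the catalog:
    <name>_curlscript alongside <name>.amc (naming per the post)."""
    catalog = PureWindowsPath(catalog_path)
    return catalog.with_name(catalog.stem + "_curlscript")

def setup_bat_path(catalog_path: str) -> PureWindowsPath:
    """Full path of the generated setup.bat inside the helper folder."""
    return curl_script_dir(catalog_path) / "setup.bat"
```

So `D:\catalogs\movies.amc` maps to `D:\catalogs\movies_curlscript\setup.bat`, as in the example above.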
So, the first time FilmAffinity (ES) is launched, it creates this file and you'll be asked to close the window and run the `setup.bat` file.
The `GetPage5Advanced()` function, on the other hand, is an alternative to `GetPage5()` with the same parameters, but it is based on `curl`. It writes the URL (and the headers) for the call into a text file (`curlRequestHeaders.txt`) and waits for a response text file (`curlOutput.html`). At the moment the function still needs to be completed, but it already works partially.
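The write-request / poll-for-response idea behind `GetPage5Advanced()` can be sketched in Python. This is not the actual Pascal code, only an illustration of the protocol; the exact layout of `curlRequestHeaders.txt` (URL on the first line, one header per line after it) is my assumption, since the post doesn't specify it:

```python
import time
from pathlib import Path

def get_page_advanced(url: str, headers: dict, work_dir: str,
                      timeout: float = 30.0, poll: float = 0.2) -> str:
    """Request side of the file-exchange protocol: write the URL and
    headers to curlRequestHeaders.txt, then poll until the watcher
    produces curlOutput.html (or the timeout expires)."""
    work = Path(work_dir)
    output = work / "curlOutput.html"
    if output.exists():
        output.unlink()  # clear any stale response from a previous call
    # Assumed request format: URL first, then "Name: value" header lines.
    lines = [url] + [f"{k}: {v}" for k, v in headers.items()]
    (work / "curlRequestHeaders.txt").write_text("\n".join(lines), encoding="utf-8")
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if output.exists():
            return output.read_text(encoding="utf-8")
        time.sleep(poll)
    raise TimeoutError("no curlOutput.html produced by the watcher")
```

A timeout like this is worth having in the real script too, so a crashed watcher doesn't hang the catalog forever.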
This was my original idea.
The magic is done by the `setup.bat` file, which performs some fairly complex operations:
1. It checks whether Task Scheduler has a task called ExternalCurlHandler. If the task doesn't exist, it creates it (asking for admin permission). This task runs automatically every time the PC starts and executes the file `ExternalCurlHandler.ps1`, which is the script that waits for `curlRequestHeaders.txt` to appear and then generates the `curlOutput.html` file.
2. It creates the file `ExternalCurlHandler.ps1` if it doesn't already exist.
3. It manually starts `ExternalCurlHandler.ps1` if it's not already running (so you don't need to reboot your PC).
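The post doesn't include the `ExternalCurlHandler.ps1` source, but the watcher logic it describes (wait for `curlRequestHeaders.txt`, fetch the page with `curl`, write `curlOutput.html`) can be sketched in Python like this. The request-file format and the injectable `fetch` parameter are my assumptions for illustration; the real watcher loops forever instead of handling a single request:

```python
import subprocess
import time
from pathlib import Path

def fetch_with_curl(url: str, headers: list) -> str:
    """Fetch a page with the system curl binary (-s silent, -L follow
    redirects, -H for each request header), as the watcher is described
    as doing."""
    cmd = ["curl", "-sL", url]
    for h in headers:
        cmd += ["-H", h]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

def watch_once(work_dir: str, fetch=fetch_with_curl, poll: float = 0.2) -> None:
    """One iteration of the watcher loop: wait for curlRequestHeaders.txt,
    fetch the page, write curlOutput.html, and remove the request file
    so the next request can be picked up."""
    work = Path(work_dir)
    request = work / "curlRequestHeaders.txt"
    while not request.exists():
        time.sleep(poll)
    lines = request.read_text(encoding="utf-8").splitlines()
    url, headers = lines[0], lines[1:]  # assumed format: URL first, then headers
    (work / "curlOutput.html").write_text(fetch(url, headers), encoding="utf-8")
    request.unlink()
```

Deleting the request file after answering is what keeps the protocol in sync: the script side only ever sees a response that matches its own last request.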
I've tested the entire setup locally on my PC and it works. You need Windows 10 or 11; I don't know whether it would work on Windows 7 or 8 (maybe ...).
For the end user, aside from the initial setup, everything is transparent — nothing changes, and the script works as it always has.
No browser launches, execution time is the same, everything is identical: 100% the same as before.
I tested both by searching with "Superman" and by directly providing the URL:
https://www.filmaffinity.com/es/film811435.html
The only limitation of this solution is image handling — but fortunately, for movies, there are no changes needed in this script regarding that.
This is a "last chance" option for when everything else fails and you still need to access a site. This will never be an official version, and you will never find it in official releases or official script updates.