[FX.php List] Anyone using XSLT anymore?

DC dan.cynosure at dbmscan.com
Wed Sep 20 10:04:15 MDT 2006


I just remembered that in order for this architectural change to work,
you must be using a version of the WPE that lets you return data from
a layout that is different from your search layout. Otherwise, there
is no way to keep the HTML field you want to search from spewing
itself into the XML result, so this scheme wouldn't work in FMP 6.
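
For anyone following along, a minimal sketch of what that looks like
against the FMS 7+ XML interface (the server, database, layout, and
field names below are placeholders, and -lay.response assumes a WPE
version that supports it): the find runs on a layout that contains
the HTML field, while -lay.response names a thin layout that omits
it, so the HTML never lands in the result XML.

// hypothetical URL; add HTTP auth if the database requires it
$url = 'http://fmserver.example.com/fmi/xml/fmresultset.xml'
     . '?-db=content'
     . '&-lay=search_layout'          // layout containing the HTML field
     . '&-lay.response=list_layout'   // thin layout used for the response
     . '&html_text=' . urlencode('searchterm')
     . '&-find';
$xml = file_get_contents($url);       // or fetch it with curl or FX.php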

dan

On Sep 20, 2006, at 11:59 AM, Bob Patin wrote:

> Dan,
>
> I think you've made the perfect suggestion!
>
> I'll populate the database with their HTML, then (as you suggested)  
> write a script to output the HTML into files that the database then  
> references, just like I would do if I were displaying photos. I  
> would think Troi File would probably output the text files; if  
> anyone knows of a better choice for this, please let me know...
>
> So the database would have to actually write the files and then  
> send them over to a different machine, unless I put all of this  
> (FMSA, OS X Server) on the same machine. I've never run FMSA  
> alongside OS X Server before; would this be a crazy way to do it?
>
> Thanks for your help (and all you guys)! I think I see daylight!
>
> Bob Patin
> Longterm Solutions
> bob at longtermsolutions.com
> 615-333-6858
> http://www.longtermsolutions.com
>
>   CONTACT US VIA INSTANT MESSAGING:
>      AIM or iChat: longterm1954
>      Yahoo: longterm_solutions
>      MSN: tech at longtermsolutions.com
>      ICQ: 159333060
>
>
> On Sep 20, 2006, at 10:42 AM, DC wrote:
>
>> Bob,
>>
>> My earlier suggestion *was* to just set up some kind of caching  
>> system.
>>
>> But since you say you have to maintain the user interface in FMP,
>> you *really should look into* my suggestion: store the HTML in FMP
>> (for searching) and a copy of every HTML file in the filesystem
>> (for serving with a PHP include()). Search using FX.php, then serve
>> the file using a path stored alongside the HTML data. The trick here
>> is to keep the HTML in the database records and the HTML files in
>> sync. I believe there is a plugin that can write files for you from
>> field data; a simple script run every quarter would keep the HTML
>> in sync.
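>>
>> A minimal sketch of that search-then-include flow with FX.php (the
>> server, database, layout, and field names here are placeholders,
>> not your actual schema):
>>
>> require_once('FX.php');
>>
>> // find records whose stored HTML matches the search term
>> $query = new FX('fmserver.example.com', 80);
>> $query->SetDBData('content.fp7', 'web_layout', 50);
>> $query->AddDBParam('html_text', $searchTerm);
>> $result = $query->FMFind();
>>
>> // each matching record also stores the path of its HTML file copy
>> foreach ($result['data'] as $record) {
>>     $path = $record['html_path'][0];
>>     if (file_exists($path)) {
>>         include($path);   // serve the cached copy from the filesystem
>>     }
>> }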
>>
>> But if you already have an HTML caching system in place, you can
>> try the following. Note: I recommend using the above solution
>> rather than trying to build an HTML output cache and scripting a
>> query-sending spider with curl! Scripting a query spider to build
>> a cache is not a quick weekend project. It might be fun and
>> interesting, but the above architectural solution is better.
>>
>> Forging ahead... I originally suggested using wget (a GUI variant
>> of it exists), but it doesn't smoothly handle form requests. If you
>> know all of the possible query permutations ahead of time, you can
>> just write a PHP script that loops through them with foreach() and
>> calls curl for every permutation (see the sketch after the login
>> example below).
>>
>> I usually use shell_exec() to call curl - see the PHP manual and
>> 'man curl' in Terminal for details on every aspect of it. curl can
>> send form elements via POST using the -d switch.
>>
>> To show the correct syntax for sending form input via POST, here is
>> an example using your SquirrelMail login page:
>>
>> curl "http://webmail.longtermsolutions.com:16080/webmail/src/ 
>> redirect.php" -d login_username=danFromFXlist -d  
>> secretkey=supersecretpassword -d js_autodetect_results=0 -d  
>> just_logged_in=1
>>
>> In PHP it would look something like this:
>>
>> $returned_page = shell_exec('curl '
>>     . '"http://webmail.longtermsolutions.com:16080/webmail/src/redirect.php"'
>>     . ' -d login_username=danFromFXlist -d secretkey=supersecretpassword'
>>     . ' -d js_autodetect_results=0 -d just_logged_in=1');
>>
>>
>> If you change these to the correct SquirrelMail login and password,
>> you should get back the post-login confirmation page from your site.
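>>
>> Putting the pieces together, here is a rough sketch of the
>> permutation loop (the URL, parameter name, values, and cache
>> directory below are placeholders, not anything from your site):
>>
>> // every pulldown value you want pre-cached during off-peak hours
>> $categories = array('widgets', 'gadgets', 'gizmos');
>>
>> foreach ($categories as $category) {
>>     // POST the form input with curl and capture the rendered HTML
>>     $cmd = 'curl -s "http://www.example.com/results.php"'
>>          . ' -d category=' . escapeshellarg($category);
>>     $html = shell_exec($cmd);
>>
>>     // write the page into the cache so it can be served later
>>     file_put_contents('/path/to/cache/' . $category . '.html', $html);
>> }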
>>
>> HTH,
>> dan
>>
>> On Sep 20, 2006, at 11:03 AM, Bob Patin wrote:
>>
>>> No, I never did, and was going to look for the email where you  
>>> mentioned a caching "spider" that would cache during off-peak  
>>> hours. Does it have the capacity to run queries on its own?  
>>> Specifically, is it smart enough to go to an input page, select  
>>> from a pulldown, and run a query?
>>
>
