I was able to get it working using the following code, which I found in the "Returning Non-HTML Content" section of the "Returning Content to the Browser" page (http://www.foxweb.com/document/index.htm?page=/document/SendData.htm):
<%
Response.Buffer = .T.  && Buffer output so headers can still be set below
FileName = Request.QueryString("FileName")
FileExtension = UPPER(JustExt(FileName))
DO CASE
CASE EMPTY(FileName)
   DO ReturnErr WITH "You must specify a file in the query string"
CASE NOT FILE(FileName)
   DO ReturnErr WITH "Can not find file " + FileName
CASE INLIST(FileExtension, "XLS", "CSV")
   ContentType = "application/vnd.ms-excel"
CASE FileExtension == "GIF"
   ContentType = "image/gif"
CASE INLIST(FileExtension, "JPG", "JPEG")
   ContentType = "image/jpeg"
OTHERWISE
   DO ReturnErr WITH "Not authorized to download files of this type"
ENDCASE
FileContent = FILETOSTR(FileName)
Response.AddHeader("Content-Disposition", "inline; filename=" + FileName)
Response.AddHeader("Content-Length", LTRIM(STR(LEN(FileContent))))
Response.ContentType = ContentType
Response.Write(FileContent)
ENDPROC

PROCEDURE ReturnErr
PARAMETERS ErrMsg
Response.Clear  && Discard any buffered output before returning the error page
%>
<HTML><BODY><%= ErrMsg %></BODY></HTML>
<%
Response.End
ENDPROC
%>
Please add the following CASE to the above code:
CASE FileExtension == "PDF"
   ContentType = "application/pdf"
In addition, I recommend that you cross-compare the DO CASE code above against the DO CASE code in Examples\Download.FWx and reconcile the differences.
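For reference, assuming the script above is saved as getfile.fwx and FoxWeb is invoked through /cgi-bin/foxweb.exe (both placeholders; adjust for your installation), it would be requested with a URL along these lines:

http://www.example.com/cgi-bin/foxweb.exe/getfile?FileName=report.pdf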
TIA,
Art Bergquist
Sent by Art Bergquist on 05/06/2019 12:15:32 PM:
Thanks,
I'm experimenting with displaying the .PDF via an .FWx script that contains an <embed> control.
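Roughly along these lines (the script name getfile.fwx and the FoxWeb path are just placeholders for whatever download script is actually used):

<HTML>
<BODY>
<!-- Embed the PDF returned by the FoxWeb download script -->
<embed src="/cgi-bin/foxweb.exe/getfile?FileName=report.pdf"
       type="application/pdf" width="100%" height="800">
</BODY>
</HTML>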
I also implemented the following in robots.txt:
# This disallows search engines from crawling the <name of folder to prevent web crawling in> subdirectory:
User-agent: *
Disallow: /<name of folder to prevent web crawling in>/
Thanks again,
Art
Sent by FoxWeb Support on 05/04/2019 01:25:21 PM:
The solution will depend on how you provide access to the static PDF files. Are the files served by FoxWeb scripts, as described in the Returning Non-HTML Content section of the Returning Content to the Browser page? If yes, then there's no problem. Just move these files to a different folder that is not accessible directly through your web server.
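For example, you could add a guard like the following at the top of the download script. This is only a sketch: it assumes your login code stores a logical session variable such as LoggedIn (substitute whatever variable your authentication code actually sets):

<%
* Refuse to serve the file unless the session shows a completed login.
* The "LoggedIn" variable name is illustrative.
LoggedIn = Session.GetVar("LoggedIn")
IF VARTYPE(LoggedIn) <> "L" OR NOT LoggedIn
   Response.Clear
   Response.Write("<HTML><BODY>Please log in first.</BODY></HTML>")
   Response.End
ENDIF
%>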
If on the other hand you link directly to the files, then you can't use the same authentication as you use for FoxWeb. In this case you either need to modify your code to serve the files through FoxWeb scripts, or you hide them behind randomly generated file names. Security by obscurity is generally a very bad idea, but it may be OK, depending on how sensitive the data you are trying to protect is. The problem with using these random names is that somebody with access could note the URL and pass it to an unauthorized person, or even worse post it somewhere, in which case web crawlers, including search engines, will discover them. You can control this to a certain extent with a robots.txt file, but do not rely on such a solution for truly confidential data.
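If you do go the random-name route, a sketch like the following would work in VFP; SYS(2015) simply returns a unique string, and the paths shown are purely illustrative:

<%
* Copy a protected PDF to a hard-to-guess name under the web root.
SourceFile = "D:\Protected\Statement.pdf"  && illustrative path
RandomName = SYS(2015) + "_" + SYS(2015) + ".pdf"
COPY FILE (SourceFile) TO ("D:\WebRoot\docs\" + RandomName)
%>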
FoxWeb Support Team
support@foxweb.com
Sent by Art Bergquist on 05/03/2019 10:01:36 PM:
Hi.
This afternoon, we discovered that someone had Googled (it may have been a Googlebot) and was able to access a .PDF file that is supposed to be accessible only after logging in to the non-anonymous part of our website.
Is there a way to prevent access to .PDFs in the non-anonymous (i.e., userid/password -protected) part of the website?
I have read through the FoxWeb Session Management page and have protected all .FWx files in the non-anonymous part of the website from being accessed directly.
I'm trying to figure out, though, how to prevent direct access to .PDFs in the non-anonymous part of the website.
TIA,
Art Bergquist